Wednesday, 21 November 2012
Friday, 21 September 2012
Without further ado, here's what's new:
- Multiple choice component:
- No feedback is now a valid scenario. A response will turn green if both feedback and distractor (answer) text have been filled in, orange if only the distractor is filled in (which is now valid), and red when nothing has been filled in.
- There is now an option to copy feedbacks. This means that if you don't want to have individual feedbacks for all your "wrong" answers, you can fill in one of them and use the Quick Feedback button.
- There is a checkbox that will present a multiple choice component as a pull down list when delivered to the student. This might be useful for questions with lots of distractors!
- You can now click on the edit icon next to a question to edit that question in situ within the test.
- There are now options to copy a component, and to move/drag a component into a different position within the question. Dragging is also generally improved (e.g. it will scroll properly if you drag a component towards the bottom or top of the browser window).
- The much requested option to put feedback immediately after the component (rather than at the end of the question) is now in place. The slider at the top of the component pane toggles between Feedback shown with components and Feedback shown at end of question. The first option means that feedback will appear immediately underneath the component to which it applies. The second option will display all feedback at the bottom of the question.
- Far too many bug fixes to list - but includes workaround for the Firefox issue where scroll bars weren't appearing on the QTIWorks preview.
- Probably many interesting and exciting new bugs introduced :-)
Friday, 14 September 2012
LTIQuizzes consists of an LTI connector (which is intended to be reusable software that can be linked to other systems), my original QTI 2.0 item player, APIS, and a new very basic QTI 2.1 assessment player which will eventually become part of my desktop Common Cartridge viewer, CCplayr. LTIQuizzes served a useful purpose for the QTIDI project, because it allowed us to demonstrate how QTI assessments can be linked into VLEs using LTI before our proper QTI delivery system, QTIWorks, was LTI enabled.
From the teacher's viewpoint LTIQuizzes is very straightforward to use. The teacher creates a new resource in the VLE, and clicks on the resource link, which takes them into LTIQuizzes. As they are a teacher in the VLE course they are given the LTI "instructor" role, and so are shown the teacher screen in LTIQuizzes. This allows them to upload a packaged QTI question or test, which will be displayed to any students that click on the VLE link into LTIQuizzes. Provided all the questions in the quiz are automatically marked (i.e. there are no essay or extended text questions), LTIQuizzes will return a score to the VLE when the student completes the assessment, if the VLE supports LTI version 1.1.
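The score return described above uses the LTI 1.1 Basic Outcomes service, in which the tool POSTs a small OAuth-signed XML envelope back to the VLE's outcome service URL. As a rough sketch (the sourcedId comes from the lis_result_sourcedid launch parameter; the message identifier and score here are illustrative):

```xml
<imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
  <imsx_POXHeader>
    <imsx_POXRequestHeaderInfo>
      <imsx_version>V1.0</imsx_version>
      <imsx_messageIdentifier>999999123</imsx_messageIdentifier>
    </imsx_POXRequestHeaderInfo>
  </imsx_POXHeader>
  <imsx_POXBody>
    <replaceResultRequest>
      <resultRecord>
        <!-- opaque identifier that the VLE supplied at launch time -->
        <sourcedGUID>
          <sourcedId>lis-result-sourcedid-from-launch</sourcedId>
        </sourcedGUID>
        <result>
          <resultScore>
            <language>en</language>
            <!-- score must be a decimal between 0.0 and 1.0 -->
            <textString>0.85</textString>
          </resultScore>
        </result>
      </resultRecord>
    </replaceResultRequest>
  </imsx_POXBody>
</imsx_POXEnvelopeRequest>
```

The request body is signed using the same OAuth consumer key and secret as the original launch, which is why a score can only be returned for activities set up through an LTI link.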
LTIQuizzes is intentionally extremely simple, and there is no way to create an activity without using an LTI link. David has taken a slightly different approach with the LTI connection in QTIWorks, where teachers set up the quiz after logging into QTIWorks directly, and then get the LTI information to configure an activity in Moodle, Blackboard or any other suitable VLE. This approach works very well with the LTI implementations that are now distributed as part of Blackboard 9 and Moodle 2.3, but is awkward with the Moodle 1.9 LTI plug-in, which assumes that administrators rather than teachers will put in most of the LTI information. When I was writing LTIQuizzes, the only LTI-enabled VLE that I had access to was Moodle 1.9 with the LTI plug-in, so naturally I used an approach which fitted very closely with that taken by the developers of that version of the Moodle plug-in.
Now that QTIWorks is fully LTI enabled, LTIQuizzes is much less important for our project; however, I do intend to do some further work on it, as it is sometimes useful to have a very simple system available for experimenting with. I will be doing some refactoring of the APIS item player over the next few months to make it use generics properly (it's very old), and will probably use it for any experiments I do for ideas that might go into an eventual QTI 2.2. I will also be integrating the item and test player parts of LTIQuizzes into QTIPlayr so that it supports a variant of Common Cartridge that uses QTI 2.1 entry level assessments. The LTI interface is part of an ongoing project to provide simple lightweight LTI code in Java, C# and PHP that tool providers can use to quickly add LTI support to their software.
Monday, 10 September 2012
Over the last few weeks we have held training workshops for the QTI-PET partners. Two have been face to face and one, mostly for the JISC RSCs, online. We've also had a couple of opportunities to demo the tools and let folks try them out. One of the great lessons learnt in the process is just how difficult it is to get University folk together during the summer! We had three attempts at finding a good date for the South workshop, and ended up with a compromise which meant some people could attend while others were given the link to the recording of the RSC session, which is at https://sas.elluminate.com/mr.jnlp?suid=M.6207BE476C2071F8C1579CFF32DDA3&sid=2009077.
So, in chronological order we had:
University of Glasgow Learning and Teaching Seminar
Niall presented the projects and their work at the Learning and Teaching Centre Seminar on 14th August. Although the delegates are close colleagues, this was the first time that they had seen these tools demonstrated, and there was some interesting discussion about possible ways of using them.
QTI-PET Workshop North
Held in the Jura computer lab in the University of Glasgow Library on 17th August.
We had 4 partners attending and Sue and Niall presented and helped with the hands-on sessions.
David Reimer from Edinburgh was using Uniqurate's Friendly Mode for language questions - in ancient Hebrew. Lesley Hamilton from the University of the West of Scotland was assembling multi-part questions for medicine and nursing in Friendly Mode. Shazia Ahmed from Maths Support at the University of Glasgow was adding to her collection of questions using Intermediate Mode. Sue demonstrated making small changes in Expert Mode to customise a question already authored in Friendly Mode.
We demonstrated the QTI Works and JAssess renderers and showed a question running in QTI Works within Moodle for the first time. We also had a look at the LTIQuizzes tool running a simple test in Moodle.
Several new Uniqurate components were suggested, including a medicine component which would enable people to author questions allowing nurses and other health professionals to practise drug dosage calculations. Participants would like to be able to use randomised graphics, including graphs and diagrams, but these features will need further development in both authoring and rendering.
There was a general feeling that the feedback should appear close to the input to which it refers. Making this change will reduce the time available for new components, so we have to decide which components are most essential.
There was some concern about terminology, since, although the attendees were all comfortable with technology, many of their colleagues are not used to using technology directly in teaching. People made these same comments at all three of the partners' workshops, and at the South Workshop, Roger Greenhalgh from Harper Adams University College volunteered to go through the terminology and find the problem areas. We are collecting instances of words and phrases that need to be changed - we try to avoid using QTI terms, but some other words are too obscure for new users, so we are looking for translations, and these edits will be made as soon as possible.
With QTI Works now linked from Uniqurate to show the question running from within the editor, it is now much easier to check that the question does what you expect. This works well in Chrome and Internet Explorer; however, some browsers, particularly Firefox, seem to have difficulty in displaying QTI Works.
The technology behaved well and the participants felt they had had a useful day and that they would use Uniqurate when back in their institutions. We asked them to let us know how they are getting on and to contact us with any difficulties.
QTI-PET Familiarisation Workshop for JISC RSCs
This session was held on 24th August and hosted by RSC Scotland; Sue did the presentation and Grainne Hamilton facilitated the session and collated the questions from attendees. The presentation and the question and answer session generally went well, although there were a few very brief fades in the audio and one would-be participant was unable to connect to the room. The authoring and delivery tools were well-behaved again and some participants were able to try out the tools during the demo.
There was a request for a pairing component, which would construct questions in which, for example, scientists are matched with their theories, or diseases with their symptoms. This is a QTI input type, which can be included in Uniqurate if time allows.
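For reference, the QTI input type in question is matchInteraction. A hedged sketch of how such a pairing question might look in QTI 2.1 (the identifiers, choices and response declaration here are invented for illustration, not anything Uniqurate currently emits):

```xml
<!-- the correct pairings, declared as directed pairs of choice identifiers -->
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="directedPair">
  <correctResponse>
    <value>DARWIN EVOLUTION</value>
    <value>MENDEL INHERITANCE</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody ... -->
<matchInteraction responseIdentifier="RESPONSE" shuffle="true" maxAssociations="2">
  <prompt>Match each scientist with their theory.</prompt>
  <simpleMatchSet>
    <simpleAssociableChoice identifier="DARWIN" matchMax="1">Darwin</simpleAssociableChoice>
    <simpleAssociableChoice identifier="MENDEL" matchMax="1">Mendel</simpleAssociableChoice>
  </simpleMatchSet>
  <simpleMatchSet>
    <simpleAssociableChoice identifier="EVOLUTION" matchMax="1">Natural selection</simpleAssociableChoice>
    <simpleAssociableChoice identifier="INHERITANCE" matchMax="1">Particulate inheritance</simpleAssociableChoice>
  </simpleMatchSet>
</matchInteraction>
```

The candidate pairs choices from the first set with choices from the second, and the response is scored against the declared directed pairs.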
This session was recorded and the URL for the recording is https://sas.elluminate.com/mr.jnlp?suid=M.6207BE476C2071F8C1579CFF32DDA3&sid=2009077. This has been circulated to people who would have liked to go to a workshop but were unable to attend on the dates chosen.
eAssessment Scotland Online Demo
On 28th August we gave an online demonstration to the eAssessment Scotland Online Conference. There were participants from Australia and Asia as well as several from Europe and the UK, and indeed some in the University of Glasgow.
Sue presented and Niall collated questions from the chat stream and answered the more technical ones. We had a 45 minute time slot, which was just long enough to demonstrate all the tools and answer delegates' questions in reasonable detail.
We also had a poster displayed at the eAssessment Scotland Conference on 31st August in Dundee.
QTI-PET Workshop South
This workshop was held in Oxford on 7th September in the University of Oxford Medical Sciences Teaching Centre. We had a seminar room with wifi access, and delegates brought their laptops, so that they had the questions they had created on their hard drive. It was attended by partners from the University of Derby, the University of Oxford, Reaseheath College and Harper Adams University College.
Sue presented and, since Paul Neve was also there, attendees were able to feed back to him directly and in more detail about the Uniqurate design. Participants used the image facilities in the static text component to add pictures to multi-part questions using the other components.
We were also able to try the new test construction facilities in Uniqurate, which worked well, with the questions we created during the earlier part of the session being assembled into tests later on.
Sue has been constructing a Moodle course for her class at the University of Glasgow, and we were able to see how the questions will look when used for formative assessment, and to go through the process of adding another question to the course. A copy of Sue's course will be used in future demonstrations to show how the setup process works, and a mock-up course is also available where users can try inserting questions for themselves.
This week we have a workshop at 12:00 on Wednesday 12th September at the ALT-C Conference in Manchester.
Friday, 31 August 2012
You will find that the button on the main menu to create a test is now enabled, and when used, you'll observe that the edit option changes to reflect that you have a test rather than a single question in memory.
Tests are divided into sections, and each section contains a number of questions. For the simplest use case, you can just leave all the default settings alone, click the Add question to section button a few times and select some questions. Give it a title, possibly fill in a bit of explanatory text that the student will see, and then save the test. Your work should then work in any delivery system compatible with QTI 2.1 tests.
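Under the hood, the saved test is a QTI 2.1 assessmentTest with exactly the structure described above: a test contains test parts, which contain sections, which reference items. A minimal sketch (identifiers, titles and filenames are illustrative; this is not Uniqurate's literal output):

```xml
<assessmentTest xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="TEST01" title="Example test">
  <testPart identifier="PART1" navigationMode="nonlinear" submissionMode="individual">
    <assessmentSection identifier="SECT1" title="Section 1" visible="true">
      <!-- each question added to the section becomes an item reference -->
      <assessmentItemRef identifier="Q1" href="question1.xml"/>
      <assessmentItemRef identifier="Q2" href="question2.xml"/>
    </assessmentSection>
  </testPart>
</assessmentTest>
```

Any delivery system that understands QTI 2.1 tests should be able to resolve the item references against the questions packaged alongside the test.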
However, Uniqurate does go a little further than that. You can have multiple sections in a test, and select random questions from the section so that no two deliveries of the test are the same. You can repeat a question within a section - useful if you want students to try several times with different randomised values (of course, this assumes that the question has randomised values!). You can even deliver different sections to the student depending on how they've done on the previous sections.
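In QTI 2.1 terms, random selection and repetition are expressed with a selection element on the section, and conditional sections with a preCondition (in a linear test part). A rough sketch with illustrative identifiers, not Uniqurate's exact output:

```xml
<assessmentSection identifier="SECT1" title="Randomised section" visible="true">
  <!-- deliver two of the three items below, chosen afresh for each delivery;
       withReplacement="true" allows the same item to be picked more than once -->
  <selection select="2" withReplacement="true"/>
  <ordering shuffle="true"/>
  <assessmentItemRef identifier="Q1" href="q1.xml"/>
  <assessmentItemRef identifier="Q2" href="q2.xml"/>
  <assessmentItemRef identifier="Q3" href="q3.xml"/>
</assessmentSection>
<assessmentSection identifier="SECT2" title="Follow-up section" visible="true">
  <!-- only presented if the candidate scored at least 1.0 on item Q1 -->
  <preCondition>
    <gte>
      <variable identifier="Q1.SCORE"/>
      <baseValue baseType="float">1.0</baseValue>
    </gte>
  </preCondition>
  <assessmentItemRef identifier="Q4" href="q4.xml"/>
</assessmentSection>
```

Repeating a randomised item via withReplacement gives the student several attempts at the same question with different template values each time.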
The test screen is a little busier than the question authoring screen, so when time permits we'll try and put together some documentation that talks people through the various options. In the meantime, however, the various little icons scattered around the screen will pop up help about the option nearby.
Unfortunately, QTIWorks does not yet have tests up and running. JAssess should run tests fine, however, and as long as you avoid the maths components of UQ, the venerable MathAssessEngine should also run them too (although it does have a bit of a problem displaying the question feedback).
Meanwhile, people, as its author I hereby declare our old test authoring tool Spectatus officially obsolete! Awww. Please join me in a minute's silence for that worthy warhorse :-)
Tuesday, 21 August 2012
We've been fortunate in that from around the CAA conference onwards I've been available to work on UQ more or less full time, and as a result we've made a lot of progress in July and August. However, the bad news is that my involvement in Uniqurate ends in November, and once we get into September and teaching commitments rear their ugly head, this quick-burn development is going to be curtailed. Short version: there is probably enough time remaining to get test authoring done and perhaps one other component.
So I'm going to put the possible other component to a vote. Feedback from the RSC workshop, plus components that have been mooted in the past, leads us to the following list - numbered for reference, but not in any particular order:
- An "Excel" component - essentially a grid of (random) numbers, some of which will be blank, and the student has to fill in the blanks. The blank values would be derived from the non-blank values.
- A Medicine Component for questions where you have a list of medicines and a list of "strengths" for each - the question would choose randomly a drug and a strength to set up a calculation (but I'd need a little more explanation here :)
- "Fill in the blanks" within text - "blanks" could be either a text field into which the student types, or a pull down menu of choices
- Diagram labelling - drag labels onto the right place of a diagram. Potentially, drag bits of a diagram into the right place of a diagram.
- Option to place feedback alongside the input it refers to
- Confidence-based marking
- A random wrapper - i.e. you define a group of components, and at question run time it chooses a random component within this wrapper to show. Thus you could set up a variety of different components but with related content - e.g. several maths components with slightly different variations of a concept, MCQs with different but related distractors, etc.
Monday, 13 August 2012
However, the reality is that even in non-maths disciplines people still need e-assessment resources that can handle some element of maths. We may not be talking about the complex, computer algebra system-enabled level seen in some of the MathAssess resources - ultimately, my feeling is that creating such rich content will most likely always be the purview of Expert Mode and those individuals who can craft QTI resources within it. Nevertheless, the ability to specify a simple expression and build a question around it spans many, if not most subject areas, and it is not unreasonable on the part of the user community to expect their e-assessment tool to support this.
In the early days of Uniqurate I'd expected much of this need to be served by individual quick maths components that we would identify across a range of disciplines - much as was the case with the quick maths triangle. However, most of the feedback emerging of late was more generic - a need for "more maths" was being articulated without very much in the way of detail. I wanted to scratch this itch but avoid travelling down the same path as Mathqurate. Thus, a general "simple maths" component started to take shape in the back of my mind, and I'm pleased to announce that the release of Uniqurate that I deployed on Friday introduces this new functionality.
You will observe that the components are now divided into three tabs, Text, Maths and Other (as the list of components on the left side was beginning to disappear off the bottom of my MacBook screen :-). In the Maths tab, alongside the existing "triangle" component, is our new maths component. When dragged onto a question canvas this results in a fairly simple component that just shows Question Text and Answer, but clicking on one of those fields will pull up a new window that looks like the one below:
The screenshot above shows a good example of how this component may be used. Here, we have a question around the old chestnut πr². It might be argued that this is actually a trick question - we're specifying our radius in centimetres but actually want the area in square millimetres! This lets us showcase the different feedback that can be set up. Note that there is a correct and "non-correct" answer - the latter being what I might refer to as wrong, despite others out there hating such a pejorative term :-) However, you can also add custom feedbacks which will be triggered on alternative answers. Our example above shows only one, but you can have as many as you like.
The example shown involves only one variable (pi doesn't count, it's a constant, along with e) but you can have as many variables as you like - the table underneath the answer field will expand to accommodate them all. Our example is also using explicit values in the min and max fields - but you can use variables or even expressions there. The constraint field also gives you further control, by specifying conditions that must be met when the question runs - you might use this to avoid zero values in variables, for example, or to ensure that a variable value makes sense in the context of the value of another variable.
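Behind the scenes this maps onto QTI template processing: each variable becomes a templateDeclaration, the min and max fields feed a random-value expression, and the constraint field becomes a templateConstraint that forces re-randomisation until the condition holds. A rough sketch of the kind of QTI 2.1 markup involved (identifiers and the specific constraint are invented for illustration, not Uniqurate's literal output):

```xml
<templateDeclaration identifier="A" cardinality="single" baseType="integer" mathVariable="true"/>
<templateDeclaration identifier="B" cardinality="single" baseType="integer" mathVariable="true"/>
<templateProcessing>
  <setTemplateValue identifier="A">
    <randomInteger min="-5" max="5"/>
  </setTemplateValue>
  <setTemplateValue identifier="B">
    <randomInteger min="-5" max="5"/>
  </setTemplateValue>
  <!-- re-randomise until A is non-zero and B is greater than A -->
  <templateConstraint>
    <and>
      <not>
        <equal toleranceMode="exact">
          <variable identifier="A"/>
          <baseValue baseType="integer">0</baseValue>
        </equal>
      </not>
      <gt>
        <variable identifier="B"/>
        <variable identifier="A"/>
      </gt>
    </and>
  </templateConstraint>
</templateProcessing>
```

Expressions in the min and max fields work the same way: the attribute values are simply replaced by nested QTI expressions.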
Even if I do say so myself, I am very pleased with how this component came out. Initially, I was not expecting to be able to achieve the level of flexibility available in the final version - certainly the use of expressions in the min and max fields was not in my mind at the beginning, and even the use of variables in those fields I considered problematic. The whole thing was opened up courtesy of a chunk of code I created a couple of weeks or so ago that converts a sensible, human-readable maths expression into QTI. Pipe a human-readable expression into this function, and you get the equivalent QTI markup straight back. Regardless of the complexity of the expression - whether it is just a single, explicit value or a six-line, bracket-tastic epic - what you will get is a QTI excerpt that can be injected into a question wherever an expression is expected.
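To give a flavour of what the conversion involves (this is an illustrative example, not Uniqurate's literal output): a human-readable expression such as A * B + C would come back as nested QTI 2.1 operator elements, with operator precedence already resolved:

```xml
<!-- A * B + C, expressed as sum(product(A, B), C) -->
<sum>
  <product>
    <variable identifier="A"/>
    <variable identifier="B"/>
  </product>
  <variable identifier="C"/>
</sum>
```

Because this is a standard QTI expression, it can be dropped anywhere the specification expects one - inside setTemplateValue, templateConstraint or response processing alike.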
I won't go into more detail; this is perhaps a little more techie-oriented than is the norm on this blog (or certainly in posts I write here) but I wanted to mention it for the benefit of other developers who might be confronted with similar needs. There may be applications for this code not only for editor apps but for those who are converting resources in other standards into QTI. Uniqurate (like all of our projects to date) is open source and the code is available on SourceForge. Those of you so inclined, feel free to grab and hack - and be sure to let me know if you do (I'm looking at you, Mr. Pierce :-)
Meanwhile, for the rest of you - please go off and use it, break it, and (inevitably!) come screaming back to me. To test your questions you'll need to use JAssess for now, as support within QTIWorks for the QTI features this component depends on is still being developed. As ever, you'll find JAssess on the QTI Support site http://qti-support.gla.ac.uk/ along with a wealth of other useful resources. At the moment, you will also need to get in touch with Graham Smith to be able to run your own questions in JAssess, but you'll find his email address prominent on the initial JAssess landing page.
Wednesday, 1 August 2012
From a user interface perspective, there are commonalities between this component and the existing multiple choice component - i.e. both of them require the ability to add a variable amount of "answers" to the component. I could have used the existing multiple choice question component as a basis for the UI for the new one, but I decided to adopt a slightly different approach. Rather than having an "add new item" button next to each answer, I used a single "add" button, and introduced the ability to re-order the answers by dragging them. I think this approach is nicer than the existing one on the MCQ component, and if the community agrees I will port it over to the MCQ component.
There are several potential avenues for the next thrust (ooer) of development, and I would be keen to get some thoughts from the community:
Simple maths component
This would allow a user to enter a simple expression, i.e. A x B + C / D, possibly some functions too, specify the ranges for each variable, and possibly other parameters such as whether to use integers only etc. I would be open to suggestions on how to represent this in the user interface. My thinking at the moment is that when you drag this component onto the canvas, it fires up a wizard that leads you through entering the expression, then specifying the ranges for each variable etc. Once complete, the wizard would close so that all you have in the canvas/preview is a placeholder represented by the expression, and an edit button. Clicking the edit button would bring the wizard back up.
Select correct answer from pull-down list
It would be easy to implement this as a component in its own right, that would appear as a block within a question. However, I suspect that most people would like to be able to place such components inline within a static block of text. This will be more complex. In terms of how I would represent that from a UI perspective, I would probably arrange it so that one could drag it into place within a static text area. However, that would break the UQ convention with respect to how components are placed onto the canvas. Again, I'm open to suggestions as to how best this should be represented.
I've avoided implementing the ability to compile a group of questions into a test yet, based on the fact that the test capabilities of several of our renderers are still in flux. But this will need doing at some point and now does make a degree of sense, given the fact that UQ now supports a variety of question components, and that there's plenty for you testing types to be getting on with in question-land for the time being!
Any and all feedback will be gratefully received!
Monday, 23 July 2012
Following on from CAA, we put in an appearance at the CETL-MSOR Conference in Sheffield. Although we were not presenting, it was useful to join the meeting and hear about delegates' experiences in assessing mathematics, statistics and operational research.
A recurring theme was the need for assessment resources in statistics in service teaching across a wide variety of disciplines and at all levels. A selection of resources which would begin to address this need will be rebuilt from the CALMAT question collection for a Glasgow University module this autumn.
We were able to add another informal partner to QTI-PET following a lunch-time discussion during the conference. So during the week we welcomed 5 new partners on board, bringing us to a total of 12 QTI-PET partners who will help us to pilot the tools and feed back to us about their experiences. We are arranging a series of training sessions to get everyone up to speed with the tools in time to get some material in front of students this coming semester.
Wednesday, 18 July 2012
Last week's participation in the 2012 International Computer Assisted Assessment (CAA) Conference was a great success for all of the QTI projects. QTI was clearly positioned front and centre throughout the conference, with stream A on Wednesday almost exclusively devoted to it. The closing speaker, Paul Bailey of JISC, went out of his way to stress the importance of QTI and open standards within the field of electronic assessment, and to talk about JISC's commitment to this area of research.
The conference was preceded by a QTI Workshop on the Monday, which was most productive, particularly from the perspective of Uniqurate in that it allowed us to get some new faces using the application - and exposed some new bugs! Consequently, a new version of Uniqurate has been released today that addresses the following bugs:
- Adding an image into a question causes invalid XML to be generated
- Using IE causes invalid XML to be generated (regardless of images!)
- Inconsistent behaviour of watermarked input fields - sometimes the watermark remained when you tried to type into the field, meaning that what you got was a mishmash of what you wanted and the watermark text
The version of UQ released today addresses these issues and represents quite a significant change under the hood, which should hopefully resolve any remaining inconsistencies across browsers in the XML that gets produced. As a bonus, while I had the image insertion code open, I also added the ability to not only add an image from a URL on the public internet, but also to push an image from your local hard disk and have it bundled as part of the eventual question content package. (This itself meant a big change under the hood with respect to the way the app handled content package manifests, but you end users don't care about that!)
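Bundling a local image means declaring it in the content package manifest alongside the item XML, so that delivery engines know to ship the file with the question. Roughly, the relevant resource entry in imsmanifest.xml looks like this (identifiers and filenames are illustrative):

```xml
<resource identifier="ITEM_1" type="imsqti_item_xmlv2p1" href="question.xml">
  <file href="question.xml"/>
  <!-- the uploaded image travels inside the package and is listed here -->
  <file href="images/diagram.png"/>
</resource>
```

The item body then references the image with a relative path, e.g. `<img src="images/diagram.png" alt="diagram"/>`, rather than an absolute URL.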
Upcoming work will probably be to add a component that gets the student to place/drag items into a correct order - this seemed to get some appreciative nods for a "next up" suggestion, but if you have any other ideas, particularly if they're cross disciplinary in nature, fire them at me!
Saturday, 7 July 2012
This release comes just ahead of the CAA conference in Southampton next week and focuses on showcasing QTI, as well as lifting the wraps on some of the functionality that people will use to actually deliver and integrate their assessments.
New look and feel
The first thing you'll notice is a new (albeit rather minimalist) look and feel. This is likely to evolve over time as I'm a terrible graphic designer, but I have to say I do like the font I've chosen. (Yay for Google Web Fonts and CSS @font-face!)
Public demo/showcase area
I have moved all of the functionality that doesn't require a full QTIWorks account into a new "Demos" section, which we'll develop over time as a nice public showcase of QTI. Currently you can do 3 things here:
- Try out our bundled samples: I've bundled a selection of QTI 2.1 sample items into this area for people to try out... and this bundle now includes some nice examples from UPMC. I've made a bit of an effort to pick appropriate "delivery settings" for these so that they are showcased in a sensible way, though I have a wee bit more work to do in this respect. I think it looks quite nice, so go and have a play!
- Upload and validate your own item or test: This is the validator from snapshot 1 with a lick of paint, and some minor improvements.
- Upload and run your own item: This lets you upload your own QTI 2.1 assessment item for running, using a number of pre-defined "delivery settings". QTIWorks will automatically validate your item and divert to the validation results page if it finds any errors.
Logged-in area
Potential users of QTIWorks can now request a free account that will give them access to their own logged-in area (which I'm currently calling "the dashboard" but will probably change very soon). This is where people will eventually be able to use the full functionality provided by QTIWorks though, of course, we've still got some way to go in this respect. That said, there's already enough to cheer up a wet weekend in Skegness, including:
- Assessment management: you can now upload and store new assessments in the system, list and view ones you've already uploaded, validate them, update their QTI data and try them out.
- Delivery Settings management: Delivery Settings are a new concept in this snapshot. These allow you to specify exactly how an item (and later a test) should be delivered to candidates. For example, in formative assessment you might want to let the candidate have an unlimited number of tries, access a model solution and to be able to reset (and even re-randomise) the question freely whereas, in summative assessment, you probably want to prevent most of these things happening. Delivery settings provide a way of controlling all of these details. I have a feeling we'll refine this a bit more over the next few iterations, so feedback on the ones we've chosen already would be most welcome.
Monday, 2 July 2012
- The "Friendly" WYSIWYG drag-and-drop mode, targeted at those who just want to create e-assessment content and have no QTI expertise - or inclination to acquire it!
- The "Intermediate" mode, for pre-existing content that couldn't readily be crammed into UQ's "Friendly" mode, enabling these novice users to alter the human-readable aspects and make changes to a question's context;
- The "Expert" mode, for the old hands at QTI who find the support mechanisms of the other two modes confining and obtrusive.
We've also got a new component, the simple text response (i.e. the student types some text, and it's either wrong or right). Simple is the operative word - it's an elementary component, but one that, bottom line, end users are going to expect even if it's so basic they don't think of articulating it as a requirement!
Friday, 29 June 2012
When I was prototyping our Java Learning Tools Interoperability (LTI 1.1) implementation for QTI-DI I needed a tool to connect it to so that I could try out features in a more meaningful way than a simple duplication of the IMS test suite would achieve. I used APIS, the QTI 2.0 item engine that I developed many years ago as part of JISC's first QTI 2.0 project. I also added on very simple assessment support (APIS only supported individual items) which I had developed for CCPlayr, my desktop Common Cartridge player. Although APIS was superseded by other JISC funded QTI players many years ago, I have continued to occasionally use it to experiment with ideas as we moved towards the final QTI 2.1 release, including the changes we made to ensure that the full functionality of the Common Cartridge was supported.
The combination of my LTI library code, the necessary code to link it to an assessment system and provide enough functionality to fully demonstrate how it could be used, my common cartridge assessment player and the updated APIS item player actually makes a fairly useful quiz system. I briefly demonstrated a development version of the system, LTIQuizzes, at the JISC/CETIS conference in Nottingham, and will be making it available as an extra deliverable from our current projects.
My initial plan for LTIQuizzes was that it would just be a QTI 2.1 Entry Level player - i.e. it would support the questions and tests from the QTI 2.1 version of Common Cartridge, but not much more. APIS does exceed the CC requirements by supporting multiple interactions in a single question and flexible response processing and feedback; however, I felt that Entry Level+ was a reasonable target. A burst of development activity on APIS over the last week has made me change my mind - I now plan to go well beyond QTI 2.1 Entry Level.
Until recently APIS did not support adaptive items, and had no concept of template items, which were not part of the first QTI 2.0 draft. Because LTIQuizzes is the only tool we have at the moment that integrates with VLEs using LTI, I have been trying to extend the range of items that we can demonstrate with it at workshops. I particularly wanted to support the items generated by Uniqurate. Over the last week I have finished support for adaptive items (basically a bug fix) and started to add template item support.
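For anyone unfamiliar with them, template items declare template variables whose values are (typically) randomised each time the item is instantiated, so every student can get different numbers in the same question. A minimal sketch of the relevant QTI markup (variable names and ranges invented for illustration):

```xml
<!-- Declare two integer template variables -->
<templateDeclaration identifier="X" cardinality="single" baseType="integer"/>
<templateDeclaration identifier="Y" cardinality="single" baseType="integer"/>

<!-- Assign random values when the item session is initialised -->
<templateProcessing>
  <setTemplateValue identifier="X">
    <randomInteger min="2" max="9"/>
  </setTemplateValue>
  <setTemplateValue identifier="Y">
    <randomInteger min="2" max="9"/>
  </setTemplateValue>
</templateProcessing>
```

In the item body the values are then shown with printedVariable, e.g. `<p>What is <printedVariable identifier="X"/> + <printedVariable identifier="Y"/>?</p>`, and response processing can compare the candidate's answer against an expression involving X and Y.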
With these improvements to the APIS item player, I now think that rather than labelling LTIQuizzes as an entry-level player I should aim to make it an intermediate system that is capable of delivering any quiz that can be written with Uniqurate without invoking the expert mode. LTIQuizzes will probably never be able to deliver the full range of items that QTIWorks, JAssess and SToMP II support; however, it will be able to deliver a useful subset of them, and may well find a niche with users who do not need the advanced maths features of the other players, but are looking for a simple and robust system that integrates well with their VLE.
I'll be making LTIQuizzes available for download from the APIS sourceforge site within a few days, once I've integrated the APIS updates and added some installation documentation.
Tuesday, 26 June 2012
You are invited to a pre-conference workshop before the CAA 2012 Conference http://caaconference.co.uk/ at the De Vere Grand Harbour Hotel, Southampton on Monday 9th July 2012, 10:00 – 16:00. This is the announcement that we are circulating:
We have recently been funded by JISC to disseminate the results of a number of recent projects on standards-based Assessment through the QTI-PET project, and as part of this activity we are holding a Workshop on the day before the CAA 2012 conference (http://caaconference.co.uk/), at the same venue.
The workshop will include introductions to some new tools being developed under the JISC funded projects QTIDI and Uniqurate:
- A user-friendly editor called Uniqurate, which produces questions conforming to the Question and Test Interoperability specification, QTIv2.1,
- A way of connecting popular VLEs to assessment delivery applications which display QTIv2.1 questions and tests – this connector itself conforms to the Learning Tools Interoperability specification, LTI,
- A simple renderer, which can deliver basic QTIv2.1 questions and tests,
- An updated version of our comprehensive renderer, which can deliver QTIv2.1 questions and tests and also has the capability to handle mathematical expressions.
There will be an opportunity to discuss participants’ assessment needs and to look at the ways these might be addressed using the applications we have available and potential developments which could be part of future projects.
We shall also demonstrate the features of the QTI Support site, created under the QTI-IPS project to help users to get started with QTI. This collection of tools, content and documentation is still growing, and we expect to add more features, prompted by the needs of our partners in the projects who are adopting the tools in their teaching.
Participants in the workshop are most welcome to join us as informal partners in QTI-PET.
Places are limited, so please register to attend the workshop by emailing Sue Milne email@example.com with your details as soon as possible.
Friday, 8 June 2012
The idea of a "halfway-house" mode goes back to the start of the project, and came about after considering what we could do with content that was authored in some other way (e.g. in another editor, or by hand). The difficulty is that QTI is essentially a programming language for electronic assessment, and there is always more than one way to skin the proverbial cat. For example, there are many ways that a multiple choice question could be implemented in QTI - Uniqurate does it one way, but there are many, many others. It would be impossible to map every single possible permutation of QTI that might represent an MCQ onto UQ's appropriate question component. Thus, at an early point it was decided that any content that was not created in UQ would have to be restricted to the XML-based Expert Mode editor.
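To illustrate the point, here is one (of the many possible) minimal QTI 2.1 encodings of an MCQ. The identifiers and question text are invented for illustration, and this isn't necessarily how Uniqurate itself writes them:

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="mcqExample" title="Example MCQ"
    adaptive="false" timeDependent="false">
  <!-- The single correct choice is recorded in the response declaration -->
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse>
      <value>ChoiceA</value>
    </correctResponse>
  </responseDeclaration>
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true" maxChoices="1">
      <prompt>Which of these is a QTI interaction?</prompt>
      <simpleChoice identifier="ChoiceA">choiceInteraction</simpleChoice>
      <simpleChoice identifier="ChoiceB">multipleChoice</simpleChoice>
      <simpleChoice identifier="ChoiceC">dropDownList</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <responseProcessing
      template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
</assessmentItem>
```

The same question could equally be written with a mapping instead of a correctResponse, with hand-rolled responseProcessing rules instead of the match_correct template, and so on - which is exactly why round-tripping arbitrary QTI into a forms-based editor is so hard.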
Some time ago, Wilbert Kraan suggested a layer on top of the Expert mode that would hide certain aspects of the QTI XML, and supplement what was left with a few additional aids. Ultimately you'd still be editing the QTI directly, but it wouldn't seem so "frightening". We took to calling that the "halfway-house" mode.
With the launch of the QTI-PET project and the need to be able to provide a means of adding new context to existing content, this became even more important. We've presented a number of papers and demos on this theme. The tl;dr version is that we've got lots of QTI content, but much of it is written from a generic point of view, and is too dry to be truly engaging. Our colleagues at (say) Harper Adams could use much of it, but their students would react much better if it could have a few subject-specific hooks added just to give it an appropriate context.
Hence, the "halfway-house", or what we're now calling Intermediate mode. If you switch to expert mode and load a question, you'll notice a little icon at the top right of the screen. Click this, and all of the XML will be hidden apart from the human-readable parts.
Thursday, 31 May 2012
This iteration of work has focused mainly on the delivery of single assessment items to candidates, joining up various bits of pipework that control this process and adding in some new rendering features and options that we'll need later for test rendering (and are quite useful in their own right too).
I'm afraid you'll still have to make do with playing around with the sample items that have already been loaded into the system for the time being. However, you'll probably be pleased to hear that the next work iteration will add in the first round of functionality for getting your own assessments into the system and doing fun things with them. (At last!) To make the wait a bit easier, I have gone through the existing samples and selected a reasonable set of options for how they should be delivered, which makes them much more fun to play with, as well as more illuminating to people who are less familiar with QTI.
Key changes in this release (1.0-DEV8)
- Assessment item rendering now incorporates the following new features:
- Display of a model solution (via the QTI <correctResponse>).
- The ability to reset a session back to the state it was in immediately after the last run of template processing, which effectively clears all candidate input back to the original state, but leaves randomly-chosen things intact. Existing "reinit" functionality has been made a bit clearer.
- An explicit "closed" state, which is either entered explicitly by the candidate, or when an adaptive item becomes complete, or when the number of attempts hits your chosen value of maxAttempts. I think this will need a bit more work, as it was never implemented in QTIEngine or MathAssessEngine.
- A new "playback" feature lets the candidate step through every interaction they made with the item, which is a possible way of implementing the allowReview concept from test rendering.
- A number of "knobs" are now available for controlling how you want to render a single item, including:
- maxAttempts (as seen in tests)
- author mode on/off (controls whether authoring hints are shown)
- a simple prompt to be shown at the top of the question (similar to rubrics in tests, but simpler)
- restrictions on what the candidate can do, including:
- close a session explicitly when interacting
- play a session back when closed
- reinitialize a session when interacting or when closed
- reset a session when interacting or when closed
- see a model solution when interacting or when closed
- see the result XML
- see the item XML source
- I have gone through all of the existing samples and set the above "knobs" to values that show each in their best light. For example, some suit a very formative "try as many times as you like" approach with rich feedback, others are more austere so are better displayed in a more rigid way. This should make the samples much more useful for people trying them out.
The authoring debugging information has been improved, and now shows bad responses (e.g. a string submitted for a float), and invalid responses (e.g. the wrong number of choices made).
- There's now a rich database structure underpinning all of this, which records everything the candidate does and the changing QTI state during this process. This is currently used to implement the "playback" functionality, and will prove invaluable for analysing result data when the system delivers real tests.
- The HTTP calls that control the typical delivery of a "candidate session" via a browser are now as RESTful as you would pragmatically expect in this type of scenario. A more formal RESTful web service API will be trivial to do from this, but I'm going to hold off until anyone actually needs it.
- HTTP responses have been improved so that they always include the Content-Length headers. Responses that don't change also send cache-friendly HTTP headers such as ETag and Cache-Control.
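As a taster, the "result XML" mentioned above follows the QTI results reporting format; something along these lines (all values purely illustrative) is what gets recorded for a candidate session on a simple single-choice item:

```xml
<assessmentResult xmlns="http://www.imsglobal.org/xsd/imsqti_result_v2p1">
  <itemResult identifier="mcqExample" datestamp="2012-05-31T12:00:00"
      sessionStatus="final">
    <!-- What the candidate actually submitted -->
    <responseVariable identifier="RESPONSE" cardinality="single" baseType="identifier">
      <candidateResponse>
        <value>ChoiceA</value>
      </candidateResponse>
    </responseVariable>
    <!-- The outcome computed by response processing -->
    <outcomeVariable identifier="SCORE" cardinality="single" baseType="float">
      <value>1.0</value>
    </outcomeVariable>
  </itemResult>
</assessmentResult>
```

It's this kind of record, kept for every attempt, that the new database structure stores and that the "playback" feature replays.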
Short term development roadmap
- Next up is adding in the web-facing functionality for creating your own assessments in the system, uploading content into them, validating them and trying them out (using the various "knobs" I listed above).
- After that it's finally time to implement the entry and exit points for candidates who get sent to QTIWorks to do an assessment. This is where LTI will come in.
- Then... tests. At last!
Thursday, 3 May 2012
It's fairly vestigial at the moment, so don't get too excited! If you're a geek, you'll quickly notice that there is no source code there at the moment. I'm presently developing QTI Works within my own github area, and plan to push things over to SourceForge once we get past a few more milestones. Until then, you can get the source code at:
Please don't expect any API stability at this point. There's also no build documentation yet, which probably doesn't help...!
Wednesday, 2 May 2012
With this in mind, I have started to feed Graham Smith's excellent collection of QTI examples into QTI Works, and plan to continue this process over the next few months while it takes shape.
These examples will be bundled into development snapshots of QTI Works for you to play around with. So far, I have assimilated 3 sets of examples:
- The IMS standard examples (and a few extra bits)
- Examples demonstrating the MathAssess QTI extensions
- A small set of example items from language testing (German & Russian)
This also provides me with some sample data for running automated "integration tests" on the JQTI+ / QTI Works software. (For example, I feed them all through the code performing the various bits of QTI logic, such as reading in XML, validating the data models, writing out XML, running template processing, running response processing etc. This is invaluable for finding and fixing bugs, and making sure we can handle "real world" examples properly.) As well as being useful for integration testing, they also help with so-called "regression testing", which makes sure that I don't break things inadvertently during the development process. These examples have been around for a few years now, so this process is a good way of doing some QA on them and making sure they're right up to date with the QTI specification.
Monday, 16 April 2012
The conference was held at Imperial College London, in several different buildings. The parallel sessions were organised in subject strands. Our presentation was in the Maths, Stats and OR strand in the last set of papers.
A common theme in the conference was the need for maths to be set in context for each discipline. The problem is compounded by students' surprise, in many subject areas, on finding that their course includes maths. Speaker after speaker reported that students engage better with maths in context, and there was a lively discussion about the most effective way of supporting students: is it better to have a subject specialist teach the maths they need for their course, or should a mathematician teach the maths? The conclusion was that there should be several people contributing to this teaching, and that if the support is removed from the location where the problem was presented, the student is likely to be less embarrassed and may seek help more readily.
Our paper demonstrated the facilities in the QTI tools for contextualising questions, and also featured the first appearance of the LTI connector embedded in an institutional Moodle - the University of Glasgow's Learning and Teaching Moodle instance.
LTIQuizzes isn't really intended for production use, but it does show what is possible. For now, storage is to files rather than a database, and the QTI engine is APIS, which only supports a subset of QTI. Database storage, full LTI 1.1 support, and the LTI 1.0 extensions for VLE persistence and course membership will all be supported very soon, and the LTI section of the code can easily be reused by other applications. (I'm also developing PHP and C# versions of the LTI classes.)
Thursday, 5 April 2012
One of the key bits of work done so far has been a redesign of JQTI, which is the Java software library for "doing" QTI stuff that Southampton developed a few years ago. I've previously blogged about why I thought this was necessary (see http://davemckain.blogspot.co.uk/2011/06/refactoring-jqti-jqti.html) and the result is coming together under the not-very-original name of JQTI+. This refactoring work is now almost complete, with the exception of QTI tests, which JQTI never quite implemented fully and which will be revisited in a few months.
On top of JQTI+, I'm building the replacement for MathAssessEngine that will become the main technical deliverable of this project. MathAssessEngine, which was based on the original QTIEngine, is also going to be torn apart and redesigned so that it can do all of the things it now needs to do, and do them all really well.
To reflect the scope of the work we're doing, we've decided to give the replacement for MathAssessEngine a completely new name and, after a couple of months of riotously bad naming attempts, we've decided to call it QTI Works.
I will deploy regular public development snapshots of QTI Works while it takes shape over the next few months, which you will be able to find at:
If you remember to wear a hard hat, you can go in and try the first development snapshot now. This showcases the brand new (and vastly improved) QTI 2.1 validation functionality in JQTI+, as well as demonstrating the newly-refactored rendering and delivery of QTI 2.1 assessment items. (You'll have to make do with a selection of sample items for the time being... the functionality that will allow you to upload, validate and try out your own items is still written on the back of some envelopes. This will turn into real code that you can play with during the next development iteration. Hopefully!)
Tuesday, 20 March 2012
QTIDI and its sister project Uniqurate were represented at the CETIS Conference in Nottingham in February 2012. We demonstrated
- The Learning Tools Interoperability (LTI) Connector running a Common Cartridge test;
- The Uniqurate question authoring tool, which also has facilities for creating a content package containing a question and its associated media and/or stylesheets;
- Interoperability between our tools and those of colleagues from Germany, France and Korea as well as the UK, using the latest version of MathAssessEngine, which is currently morphing into QTI Works.
Here's a quick roundup from Niall and David about what’s going on in QTIDI:
Niall: “I have been working on getting a good IMS LTI example working in Java. (The IMS example software is written in PHP.) A basic LTI 1.0 test with consumer (LMS) and tool components is almost complete. I'll be writing some developer documentation for LTI containing this code as well as C# and PHP example code. I have also (with help from Dave) set up a MathAssessEngine NetBeans project on my Laptop. I have also been taking part in the IMS QTI working group meetings.”
David: “Most of the work so far has been continued refactoring of JQTI to make it more sensible, and an initial webapp framework for SonOf(QTI|MA)Engine. So, all this first iteration is going to do is provide a "validation service". This will let you upload a standalone item XML or item/test Content Package, which will then be fully validated and will generate a summary report. Once that's in place, next iteration will be refactoring the rendering of standalone items, as well as a first cut of the REST API for that, which will be the way in for LTI.”
Note that the original MathAssessEngine will be replaced during this project, so the appearance and the way things get uploaded will change. The test mode also needs improvement, so there will be some changes there too. This means, of course, that for the moment feedback from clients is most useful when it points out "would like to see" suggestions (and the reasons for them), which can then be matched up with the specification to make sure things work the way they should, rather than as a debugging exercise.