
A System in the Wild: Deploying a Two Player Arm Rehabilitation System for Children With Cerebral Palsy in a School Environment

Raymond Holt, Andrew Weightman, Justin Gallagher, Nick Preston, Martin Levesley, Mark Mon-Williams, and Bipinchandra Bhakta

Journal of Usability Studies, Volume 8, Issue 4, August 2013, pp. 111 - 126



This section reports results from three data sources: the error logs and callouts that indicated problems during system deployment, the amount of usage the systems received, and the qualitative feedback from participating staff and children.

Errors and Callouts

No calls were received from the schools during any of the deployments. However, when we collected the systems after the first deployment, teachers from all four schools reported that the systems were prone to crashing when first initialized and that this cut into the amount of therapeutic time available. Teachers had struggled to remedy the situation: even turning the PC off and back on again did not correct the problem. Because of these issues, there was in some cases insufficient time to carry out the required sessions; on other occasions, the systems worked without difficulty. Analysis of the error logs confirmed this intermittent behavior, showing a large number of Transmission Control Protocol (TCP) errors, which indicated that the PC was not communicating properly with the cRIO. All four schools reported the same problem in the first deployment, yet the systems continued to be used and no school contacted the team for technical support during this period. This caused us some frustration, as the systems could have received greater usage during this deployment had the schools contacted us for assistance when problems first arose. However, we were also conscious that the participating staff received no direct benefit from the project and were doing us a great service simply by taking the time to use the system at all. Reporting problems would have been an additional burden, and we felt it was important to be sensitive to the demands we were making on teachers' time, particularly given the need to maintain good relationships with them for this and future projects. For this reason, we did not take issue with the staff over the lack of contact, but simply reminded them at the start of the second deployment that they could contact us if problems persisted.

We eventually identified the problem: the system timed out during initialization if the trigger buttons on the joysticks were not pressed within 30 seconds of the system starting up. Because the cRIO was powered separately from the PC, turning the PC off did not correct the problem; the cRIO would continue to report an initialization error. Turning the system off at the main power supply, however, reset the cRIO the next time it was turned on, so the system could then initialize properly. This explained the intermittent nature of the problem: If a teacher initialized the system within 30 seconds of it first being turned on, it ran without difficulty; if not, it would not run again until it had been switched off and back on at the main power supply, which often was not until the next day, when the system was due to be used again.

After the first phase of the first deployment, we addressed this problem by extending the time-out period to two minutes and providing more explicit instructions on the importance of initializing the joysticks within this period. The problem nevertheless persisted into the second phase of the first deployment, and was again only reported at the end of the deployment. Accordingly, we removed the time-out altogether so that teachers could initialize the joysticks at their leisure. This resolved the problem for the rest of the study.
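The failure mode described above can be sketched in pseudocode form. This is a minimal illustration only, not the actual LabVIEW/cRIO implementation; the function and parameter names are hypothetical, and the only behavior taken from the text is the contrast between a finite time-out (which latches an error until a power cycle) and an indefinite wait (the eventual fix).

```python
import time

def initialize_joysticks(trigger_pressed, timeout_s=None):
    """Wait for the joystick trigger press that completes initialization.

    Hypothetical sketch: with a finite timeout (e.g., the original 30 s),
    a slow start leaves the controller in an error state that persists
    until a full power cycle; with timeout_s=None (the eventual fix),
    the wait simply blocks until the trigger is pressed.
    """
    start = time.monotonic()
    while not trigger_pressed():
        if timeout_s is not None and time.monotonic() - start > timeout_s:
            raise TimeoutError("initialization timed out; power cycle required")
        time.sleep(0.05)
    return "initialized"
```

Removing the deadline entirely, rather than merely lengthening it, eliminates the race between system start-up and teacher action altogether, which is why the final fix held for the rest of the study.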

Usage Data

The system logged the length of each game and the date and time at which it took place. We analyzed data only for children with CP, as these were the target users of the system. The system recorded the raw amount of therapeutic play that the children received; the figures presented here do not include time spent setting up or waiting between games.

No allowance was made for days when the schools were closed due to, for example, bad weather, teacher training, or public holidays, or when class activities such as school visits made it impossible or impractical to use the system. The aim was to provide a realistic snapshot of how the system would be used, and these are all factors that would affect the system in real usage. One issue worth noting, however, is that Children 9 and 10 (both at School H) did not use the system at all in their second phase. The participating teachers reported that this was due to the pressure of preparing for Standard Assessment Test (SAT) exams and was therefore a function of the time of year at which the deployment took place, rather than any disinclination towards the single-player mode. While this represents a genuine usage pattern, it is important to bear in mind when comparing overall usage in single- versus two-player modes, as it makes the gap between the two appear larger than it might have been had the deployment occurred at a different time of year. It is also worth noting that in Schools C and H, which each had two children participating in the study, the children with CP did not play against each other in two-player mode, but instead played with other friends. Because the combination of frequency and intensity of exercise matters, rather than the raw amount of exercise undertaken, Table 1 reports the number of days on which the system was used in its single-player and two-player deployments (from a possible 20 days in each case), along with the mean length of sessions in each deployment.
Note that where the system was used multiple times in the same day, this was counted as usage on one day, but the session lengths were treated separately, because multiple short sessions in a day do not equate to a single full-length session. Session length therefore covers the total time spent carrying out therapeutic play in a single sitting with the system.
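Under these counting rules, the aggregation for one child can be sketched as follows. This is a hypothetical Python illustration; the function name and log format are assumptions for exposition, not part of the deployed system.

```python
from datetime import date
from statistics import mean

def summarize_usage(sessions):
    """Summarize one child's usage log, given as (date, minutes) pairs.

    Multiple sessions on the same day count as a single day of usage,
    but each sitting's length contributes separately to the mean
    session length, as described in the text.
    """
    days_used = len({day for day, _ in sessions})
    mean_session = mean(minutes for _, minutes in sessions) if sessions else 0.0
    return days_used, mean_session

# Example with made-up data: two sittings on 1 May, one on 2 May.
log = [(date(2013, 5, 1), 10), (date(2013, 5, 1), 5), (date(2013, 5, 2), 15)]
# summarize_usage(log) counts 2 days of usage, with a mean sitting of 10 minutes.
```

Counting days and averaging sittings separately keeps the frequency measure (days used out of 20) independent of the intensity measure (minutes per sitting), which is the distinction Table 1 relies on.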

Table 1. Amount of Usage per Child


Taken across both single- and two-player deployments, the system was used on a mean of 22.4 days (SD = 8.49 days) of a possible 40 (the sum of the 20 possible days in each deployment). The mean session length was 11.0 minutes (SD = 6.76 minutes). Table 1 shows that session lengths varied dramatically from child to child; for example, Children 3, 4, and 6 used the system more than the other children. Across all of the children, there was little difference between single- and two-player usage. The system was used on a mean of 9.55 days (SD = 5.68 days) in single-player mode compared with a mean of 12.8 days (SD = 4.47 days) in two-player mode, both from a possible 20 days. The mean session lengths were 12.0 minutes (SD = 7.51 min) for single-player mode and 10.4 minutes (SD = 6.09 min) for two-player mode. If the results for Children 9 and 10 are excluded, on the basis that their lack of play in single-player mode was an artifact of timetabling problems rather than a disinclination towards the mode itself, this becomes a mean of 11.6 days (SD = 3.53 days) in single-player mode and 13.67 days (SD = 4.27 days) in two-player mode, with mean session lengths of 12.0 minutes (SD = 7.51 min) and 10.4 minutes (SD = 6.09 min) respectively. Such small differences are not meaningful in a therapeutic sense, but they do indicate that providing a two-player option did not present a barrier to using the system, despite requiring children who did not need therapy to take the time to participate.

In all cases, the average times were substantially below the target of 30 minutes per day. In fact, 30-minute sessions were achieved on only seven occasions, and then by only three of the children (three times by Child 3, three times by Child 8, and once by Child 1). The lowest target that might be considered acceptable (20 minutes, three times a week) was not achieved by any of the children.

The most common time for usage was during the morning registration period (the first 15 minutes of the 9 a.m. to 10 a.m. slot) and then afternoon registration (the last 15 minutes of the 1 p.m. to 2 p.m. slot). It is worth noting that the children whose usage was concentrated around lesson times (Children 3, 4, and 11) or lunchtime (Child 6) were those who achieved the longest sessions.

Qualitative Feedback

Because scheduling interviews with staff at the participating schools was difficult, given the considerable demands already on their time, we gathered teacher and child feedback through questionnaires comprising open questions on the system's usage. Through these questionnaires we asked whether there were any barriers to its use and requested suggestions for future improvements. However, getting staff to complete and return the questionnaires, and ensuring that the children returned theirs, proved extremely problematic.

We sent the questionnaires by mail one week before the end of the deployment. All children who used the system and any staff who supervised their use were instructed to complete the questionnaires and leave them with the system for collection. We telephoned the schools to explain this process when the questionnaires were mailed to the school. We telephoned again the day before the systems and questionnaires were to be collected, as a reminder to complete the questionnaires. However, it was rarely possible to speak with the teachers themselves, and we were obliged to leave messages with secretarial staff to be passed on to the teachers.

Despite our prompts, and despite the original information letters and briefing sessions emphasizing the importance of this feedback to evaluating the system, very few schools provided any of the questionnaires when we collected the system. We made follow-up phone calls and sent additional copies of the questionnaires, with pre-paid, self-addressed envelopes, to the schools that had not completed them. Through a combination of direct collection and mail, we received a total of 21 questionnaires from the staff, with at least one from each school and in some cases two or three. It was not always clear which school each questionnaire came from, particularly when questionnaires were returned by mail. We received 14 questionnaires from the children, although the usage data recorded on the systems showed that at least 38 children used the system during the deployment. Because the questionnaires were often not labeled fully (for example, omitting the child's name or surname), we were not always able to determine which came from children with CP. As the original questionnaires had been intended to be left with the system, we had assumed that we would know which school they came from and could label them accordingly. This problem would have been prevented if the questionnaires had been pre-labeled with the participating school; similarly, asking children to provide both forename and surname would at least have allowed us to identify the children with CP. Initial return rates might also have been higher if a simple set of closed questions had been used, as open questions take significantly longer to complete, although the value of this would have been limited: we had already gathered usage data from the system, and our interest was in what had facilitated or prevented that usage.

Feedback comments from the school staff responsible for the system focused consistently on the size of the system and ease of setup. One school indicated that maneuverability was not important, as the size of the system meant that it could not be moved anywhere else. The other schools indicated that being able to move the system around easily was very important, as it needed to be used in different classrooms or moved out of the way when not in use. Every school was able to find space for the system. No school felt it was too big, but all agreed that a system any larger than this would be untenable.

Ease of setup was also an issue. While all schools praised the ease with which the system could be plugged in and booted up, those in the first deployment reported that the system sometimes failed to initialize properly the first time it was switched on each day, consistent with the initialization problem identified through the error logs and end-of-deployment reports. While they were always able to resolve this by restarting the system, School B in particular pointed out that this cut significantly into the short periods of time they were able to find for its use.

All schools indicated that the major problem in the systems’ usage was in relation to timetabling. Only School C took regular time out of lessons to utilize the system. For the other schools, the most convenient time for using the system was during morning or afternoon registration. This immediately restricted playing sessions to no more than 10 or 15 minutes, allowing for time to let the children settle down to their task. Other schools took time out of lessons only where it could be conveniently accommodated, such as during times where a story was being read to the whole class. Schools E, F, G, and H, in particular, all indicated that exam preparation cut into the amount of time available for therapy, particularly in the second phase of deployment (the multiplayer phase for Schools E and F and single players for School G and H). Of these, only School H failed to find any time for the system’s use during this phase, as indicated by their usage data.

Finally, several respondents mentioned the need for more games to maintain interest, noting that a small library of games might sustain interest for a limited deployment (such as the eight total weeks of this study), but that beyond this, boredom could become an issue.

These comments confirmed our initial findings from working with teachers while developing the system. Maneuverability, ease of setup, and a small footprint are all important for a school environment. This demonstrates the merit of the revised concept of a self-contained unit over the initial notion of up to six independent joysticks, and also the importance of engaging with users to properly understand their needs. Nevertheless, it also demonstrates the inherent difficulty of finding time to use such a system in the school day. There was a fundamental mismatch between the schools' goal of educational achievement and our goal of delivering the required therapy. Participating teachers were all sympathetic to the need for children to undertake therapeutic exercise, but as they would ultimately be assessed on the academic performance of their pupils, there was an active disincentive to making class time available for therapeutic use.

