Tasks

List of tasks
WP1
WP2
WP3
WP4
WP5
WP6
WP7
WP8
Task details

T7.1 - User evaluation and feedback adjustments

Start date: Nov 1, 2010
End date: Mar 1, 2013

The ATLAS platform and services will be piloted and validated by different sets of users. Each partner participating in this task is responsible for involving between 4 and 50 users (for a total of at least 150 users) from the user groups defined on p. 11 of the DoW (refer to Section B.3.2.a for a description of the Living Labs methodology). All events and activities related to user evaluation are coordinated by ITD, which is also responsible for analyzing user feedback and producing a final report with the findings. Tetracom is responsible for any changes to the software platform made in response to user feedback.

The adopted iterative methodology will allow a small sample of the end-user community to take part in the development process through formative evaluation, thereby shaping the platform to their current and emerging needs. In addition, the task will collate evaluation data from the community on a larger scale to validate the ATLAS platform. The experimentation and validation findings will be enriched by three different experimental and validation settings: laboratory, organization (based on test scenarios) and community (demonstration).

The framework of reference to be used for evaluating the ATLAS platform, i‑Publisher, i‑Librarian and EUDocLib is the model of DeLone and McLean (1992) and its updated version (2003). This model is now widely accepted for the evaluation of information systems and includes the following evaluation criteria (a sketch of how feedback against these criteria might be collated follows the list):

  1. System quality criteria, such as adaptability, availability, reliability, response time and usability.
  2. Information quality criteria, which we are not evaluating, but which will clearly be a factor in operational systems (and, thus, may influence the design process).
  3. Use criteria, that is, nature of use, navigation patterns, number of site visits and number of transactions executed.
  4. User satisfaction. This is clearly a key factor in the success of ATLAS at the demonstration stage. At this point we envisage conducting user surveys during demonstration sessions to determine the level of satisfaction with the systems.
  5. Individual impact, defined as “the effect of information on the behavior of the recipient.” To evaluate this aspect fully, however, would require ATLAS outputs to be used in real systems.
  6. Organizational impact. The criteria proposed by DeLone and McLean (2003) under this heading relate mainly to e-commerce systems and are not appropriate to ATLAS. However, we can suggest other appropriate criteria, such as effectiveness, efficiency, cost benefit and, crucially in the case of ATLAS, willingness to adopt.
  7. Essential properties. These are not among the standard system-success criteria but are derived from the main goal served by the ATLAS technologies, i.e. long-term preservation. They include authenticity, integrity, access restriction, data placement, data display and records migration.
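
To make the evaluation framework concrete, the sketch below shows one way the feedback collected under these criteria could be collated per criterion and per validation setting (laboratory, organization, community), together with a check against the 150-user target from this task. It is a minimal illustration: the record structure, the 1-5 Likert scale and all identifiers are assumptions for this sketch, not defined in the DoW. Information quality and individual impact (items 2 and 5) are omitted because, as noted above, they are not evaluated directly within ATLAS.

```python
# Minimal sketch of collating evaluation feedback per DeLone & McLean
# criterion and per validation setting. All identifiers and the 1-5
# Likert scale are illustrative assumptions, not defined in the DoW.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

CRITERIA = (
    "system quality",         # adaptability, availability, reliability, ...
    "use",                    # navigation patterns, site visits, transactions
    "user satisfaction",      # survey scores from demonstration sessions
    "organizational impact",  # effectiveness, efficiency, willingness to adopt
    "essential properties",   # authenticity, integrity, records migration, ...
)
SETTINGS = ("laboratory", "organization", "community")

@dataclass
class EvaluationRecord:
    user_id: str
    setting: str    # one of SETTINGS
    criterion: str  # one of CRITERIA
    score: int      # hypothetical 1 (poor) .. 5 (excellent) Likert rating

def collate(records: list[EvaluationRecord]) -> dict[tuple[str, str], float]:
    """Mean score for every (setting, criterion) pair that has data."""
    buckets: dict[tuple[str, str], list[int]] = defaultdict(list)
    for r in records:
        buckets[(r.setting, r.criterion)].append(r.score)
    return {key: mean(scores) for key, scores in buckets.items()}

def enough_users(records: list[EvaluationRecord], minimum: int = 150) -> bool:
    """Check the DoW target of at least 150 distinct evaluation users."""
    return len({r.user_id for r in records}) >= minimum

if __name__ == "__main__":
    demo = [
        EvaluationRecord("u1", "laboratory", "system quality", 4),
        EvaluationRecord("u2", "laboratory", "system quality", 5),
        EvaluationRecord("u3", "community", "user satisfaction", 3),
    ]
    for (setting, criterion), avg in sorted(collate(demo).items()):
        print(f"{setting:12s} {criterion:20s} mean={avg:.1f}")
    print("150-user target met:", enough_users(demo))
```

In practice, the same records could feed both the formative iterations during development and the final report on findings coordinated by ITD.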