Start date | Nov 1, 2010 |
End date | Mar 1, 2013 |
The ATLAS platform and services will be piloted and validated by different sets of users. Each partner participating in this task takes responsibility for involving between 4 and 50 users (for a total of at least 150 users) from the user groups defined on p. 11 of the DoW (refer to Section B.3.2.a for a description of the Living Labs methodology). All events and activities related to user evaluation are coordinated by ITD, which is also responsible for analyzing user feedback and producing a final report with the findings. Tetracom is responsible for any changes to the software platform that arise in response to user feedback.
The adopted iterative methodology will permit a small sample of the end-user community to take part in the development process through formative evaluation, thereby shaping the platform to their current and emerging needs. In addition, the task will collate evaluation data from the community on a larger scale to validate the ATLAS platform. The experimentation and validation findings will be enriched by three different experimental and validation settings: laboratory, organization (based on test scenarios), and community (demonstration).
The framework of reference for evaluating the ATLAS platform, i‑Publisher, i‑Librarian, and EUDocLib is the model of DeLone and McLean (1992) and its updated version (2003). This model is now widely accepted for the evaluation of information systems and includes the following evaluation criteria: