Background:
These are the notes I took from my conversation with [~msikora], director of evaluation at the Detroit Institute of Arts (DIA). He has been at the DIA for a while, worked in publications, and has facilitated instructional media projects. In 2002, around the time of the DIA renovation, Matt began doing evaluation work.
- What is evaluation? An umbrella term that includes visitor research, staff training & development, and gathering research from other museums.
- Contractors are sometimes used for visitor research, depending on the project.
- Visitor research asks questions such as: How do we address the needs of visitors? What is the audience's understanding of the topic explored in the exhibition? Are the planned visitor outcomes being achieved?
Visitor research & Engage
In general there is uncertainty about how visitor research will happen in relation to Engage. In January 2009 the DIA considered applying for an IMLS National Leadership Grant, which would have helped the DIA bring in outside visitor research consultants.
How visitor research intersects with the exhibition preparation process:
Front-end evaluation: research that happens at the beginning of exhibition/product development. The goal is to understand users' attitudes toward the subject matter and the exhibition topic the museum has in mind. Front-end research happens alongside the conceptual development of the exhibition.
Formative evaluation: once prototypes are available, users are gathered to test them, usually 10-20 people. Certain aspects of the exhibit are mocked up, e.g. drafts of labels or test pieces. The hope is that testing a subset of the entire exhibition will surface problems early on; it is a cost-saving measure too.
Summative evaluation: occurs once the product/exhibit is finished. How are visitors using the exhibition? What are their reactions? Are the outcomes of the exhibition being met?
Uncertainties:
- With respect to the Engage authoring tools, how will the DIA harness them, and what exactly will the tools do? It's difficult to have a concrete understanding of the project, something that's needed for evaluation plans to be developed. It's also unclear what prototypes will be available.
- It's unclear how existing gallery interpretive models (e.g. response stations, multiple-perspectives labels) will "hook" into Engage products. This is another area where evaluation will have to be done.
- So far there is little understanding of what authoring tools will be created or what their imagined functionality will be, so currently it's difficult to imagine what the possibilities are.