W15 ShortCOM paper - good practice guidelines

Lead - Jim Ascough
Contributors - Amgad Elmahdi, Brian S. McIntosh, Keith Matthews, Jenifer Ticehurst, Julien Harou, Andrea Sulis

(As I've commented in the challenges section I think we need a consistent, simple conceptual framework to structure this section, and in fact the whole document. Maybe something like the categories development - adoption and use - context would do. We could be more sophisticated and use something like the socio-technical systems distinction between actors - technology - task - (work and institutional) structure, but I am not sure it is needed. The alternative would be to utilise the categories of do's and don'ts induced from our workshop discussions more directly in this section. We'd have to be careful to link them to the 'challenges' section so we don't rather arbitrarily recommend a set of actions that don't correspond to the challenges identified. I'll see what I can do when it is my turn to edit - Brian)

EDSS success, like that of any collective human endeavor, is hard to pinpoint. There are no ready formulas, but rather a mix of institutional, technical and human factors that affect the ability of an EDSS to shape policy. This section on good EDSS development practice considers EDSS as policy tools, as scientific and engineering tools, and as software tools. It is intended for both developers and potential EDSS users.

1. EDSS as a policy tool

EDSS are usually deployed within a policy development process as policy simulation tools, used to investigate the possible effects of proposed policy and engineering actions. Performance can be evaluated around three pillars: salience, credibility and legitimacy.

  • Salience - is the information relevant to the decision being made?
  • Credibility - is the output of the DSS seen as correct (particularly for historical examples or local cases)? Do the DSS developers have a personal track record with the stakeholders?
  • Legitimacy - is the process of using the DSS seen as fair? Is it inclusive, allowing participation, or is it a crude use of science to force through change?

When EDSS are applied to evaluate the impacts of different policies or actions, the interests and opinions of numerous stakeholders must be considered, and the success (or failure) of these attempts is often hard to judge. Many frameworks for judging success have been proposed. Following Goeller (1988), we suggest in this paper that the success of EDSS in practice can be judged by three measures, each applying to a different time period: analysis success, application success and outcome success. Analysis success reflects how the analysis was performed and presented to the stakeholders. Analysts must attend to client satisfaction, but success based on this measure alone will be transitory; in our experience, a client may be dissatisfied simply because the analysis tells them something they do not want to accept. Application success concerns how the EDSS was used in the decision-making process and by whom; the extent to which information from the EDSS application influences the decision-making process is a good indicator of success at this level. Because identifying problems is a key aspect of solving them, application success can also support the framing of problems so that those worth solving are better identified (Hermans, 2005). Outcome success indicates how the use of analytic results from the EDSS affects system planning and management, and whether information from the model improves the problem situation.

Julien: can the following points be integrated into or linked to the 3 categories above?
- Ensure commitment of intended end-users. This may be inherent in policy- or practice-led research that produces a relevant DSS. However, the intended use or purpose may change throughout the project (government organisations are always restructuring). If the project is funded directly by the intended users, they are likely to be more committed to working with you than if the funding came from elsewhere. Similarly, if the intended end-users are paid for the time and effort they contribute to feedback during model development, they do not have to justify diverting time and money from another project to yours.
- It is imperative to recognise the difference between science-led and policy- or practice-led research. Knowledge and expertise generated in science-led research is the vital underpinning capacity for the latter but translation from one to the other is a process that not all academics are interested in or good at.
- Have reasonable expectations of the DSS. Do not over-promise.
- Be realistic, but also take care that all the effects of the DSS are captured. DSS on their own do not change the world - DSS as part of social processes might, but don't be surprised if the change occurs long after the DSS has been shelved. The DSS may, however, have had a key role in starting a process of decision making.
- Be clear on what success is - how defined, how measured.
- DSS can be transitory - if intended for knowledge transfer then they are typically subsumed into practitioners' tacit knowledge. This is success.

2. EDSS as a scientific and engineering tool

Does the EDSS represent what is really going on? This is the cornerstone of EDSS credibility. We structure this section around two questions: does the tool adequately represent the physical side of the problem, and does it adequately represent the human side? On the physical side: can the EDSS reliably represent the system's physics, ecology and/or engineering operations? Has the model at the tool's core been properly calibrated and validated against observed data? On the human side: can the EDSS represent policy options and institutional drivers such as social processes, property rights, markets, and regulations?
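
Calibration and validation are where credibility is most often won or lost. As a minimal sketch of the idea (Python; run_model, fit and observed_series are hypothetical placeholders, not calls from any real library), the code below splits an observed record into separate calibration and validation periods and scores the simulation with the Nash-Sutcliffe efficiency, a common goodness-of-fit measure for environmental models:

    import numpy as np

    def nash_sutcliffe(observed, simulated):
        # Nash-Sutcliffe efficiency: 1.0 is a perfect fit; values <= 0 mean
        # the model predicts no better than the mean of the observations.
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        residual = np.sum((observed - simulated) ** 2)
        variance = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - residual / variance

    def split_record(series, calibration_fraction=0.6):
        # Hold back part of the observed record so the model can be
        # judged on data it was not tuned to reproduce.
        cut = int(len(series) * calibration_fraction)
        return series[:cut], series[cut:]

    # Hypothetical usage, with run_model() standing in for the simulation
    # model at the EDSS core:
    #   calib_obs, valid_obs = split_record(observed_series)
    #   params = fit(run_model, calib_obs)    # tune on one period...
    #   nse = nash_sutcliffe(valid_obs, run_model(params, len(valid_obs)))
    #   # ...and report fit on the unseen validation period.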

Other important considerations include:
Better data = better DSS. When choosing where to invest, invest in the data that underpin a DSS rather than in the web interface. The wrong answer is still the wrong answer no matter how good it looks.
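
To make the data point concrete, here is a small illustrative sketch (Python; the thresholds are hypothetical and must be set per variable) of the kind of basic quality checks - missing values, physically implausible readings, stuck sensors - that observed data should pass before they are allowed to underpin a DSS:

    import numpy as np

    def quality_check(series, lower=0.0, upper=1e4, max_flat_run=24):
        # Return a list of data-quality problems found in a 1-D record.
        series = np.asarray(series, dtype=float)
        problems = []
        if np.isnan(series).any():
            problems.append(f"{int(np.isnan(series).sum())} missing values")
        if ((series < lower) | (series > upper)).any():
            problems.append("values outside the plausible physical range")
        # A long run of identical consecutive readings suggests a stuck sensor.
        run = longest = 1
        for a, b in zip(series, series[1:]):
            run = run + 1 if a == b else 1
            longest = max(longest, run)
        if longest >= max_flat_run:
            problems.append(f"constant run of {longest} readings (stuck sensor?)")
        return problems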

The planning, management and operation of real-world systems is perhaps more complex than what developers have been able to build into their EDSS, because we do not sufficiently understand the multiple interdependent physical, ecological, engineering, social and political processes that govern the behavior of such systems. EDSS are always simplified representations of the real-world systems they model. Applying these approximations of reality in a way that contributes to everyone's improved understanding is the cornerstone of EDSS credibility.

3. EDSS as a software tool

Software implementation affects system usefulness and longevity. Important factors include the institution producing the EDSS, licensing arrangements, and software structure and approach.

The institution or group producing an EDSS may influence long-term results. Will the EDSS creator be able to assure long-term maintenance and support for the product? How will the creator be affected by personnel changes in its ranks? An EDSS that depends on one or a few charismatic developers is less robust to personnel changes. Larger, credible institutions or groups have an advantage in guaranteeing long-term maintenance and support. Until the 1990s, EDSS were often written by the scientists and engineers who also built the mathematical model. Today, successful EDSS are made by teams of mathematical model builders, professional software engineers, and user-interface/communication/group-dynamics specialists.

Typically EDSS support long-term environmental system management rather than short-term engineering design, so shelf-life and long-term maintenance and support are important EDSS characteristics. Software licensing plays an important role by providing a contractual description of the user-creator relationship; options include open-source, freeware and proprietary licenses. Because it fosters long-term thinking, licensing is essential for software sustainability. Two recommended arrangements are proprietary and open-source systems. Proprietary systems link software to livelihoods, providing a strong incentive for long-term quality; indeed, many successful EDSS involve software and consulting services provided by one or more contractors, often linked to training services. Recognised research institutions may also license proprietary EDSS. A different and emerging trend is EDSS development by international consortia of researchers working on open-source projects. These groups use online code management systems to organise code contributions into a licensed software product or library. For example, HydroPlatform (Harou et al. 2010) allows network-based environmental management models to connect to a shared open-source user interface and database.

Finally, software structure may be a factor in success: how easy is the code to maintain, expand and link to other codes? EDSS are complex, and success requires professional software development practices. Several software approaches are available, including dedicated decision support systems (DSS), modelling frameworks, commercial modelling systems, and stand-alone software (reviewed by Harou et al. 2010). The boundaries between these approaches sometimes overlap, but they reflect the wide range of options. Expanding an EDSS built with a modelling framework (Argent 2004, Rizzoli and Argent 2006) will likely be simpler than expanding a single-purpose or stand-alone EDSS not designed for modularity. The ability of an EDSS to accommodate modelling interface protocols such as OpenMI (Gregersen et al. 2007) is a crucial factor determining its ability to link models and build integrated tools.
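
To illustrate why modularity matters, the sketch below (Python; every name is hypothetical, and the interface is only loosely inspired by component standards such as OpenMI rather than implementing the actual OpenMI specification) shows how a minimal linkable-component interface lets two independently developed models exchange values at each time step:

    from abc import ABC, abstractmethod

    class LinkableComponent(ABC):
        # Hypothetical minimal component contract: named inputs and
        # outputs plus a time-stepping method.
        @abstractmethod
        def get_value(self, quantity: str) -> float: ...
        @abstractmethod
        def set_value(self, quantity: str, value: float) -> None: ...
        @abstractmethod
        def step(self) -> None: ...

    class RainfallRunoffModel(LinkableComponent):
        # Toy model: a fixed fraction of rainfall becomes streamflow.
        def __init__(self, runoff_coefficient=0.4):
            self.rain, self.flow, self.c = 0.0, 0.0, runoff_coefficient
        def get_value(self, quantity):
            return {"flow": self.flow}[quantity]
        def set_value(self, quantity, value):
            if quantity == "rainfall":
                self.rain = value
        def step(self):
            self.flow = self.c * self.rain

    class Reservoir(LinkableComponent):
        # Toy model: stores inflow and releases a fixed demand.
        def __init__(self, demand=2.0):
            self.storage, self.inflow, self.demand = 10.0, 0.0, demand
        def get_value(self, quantity):
            return {"storage": self.storage}[quantity]
        def set_value(self, quantity, value):
            if quantity == "inflow":
                self.inflow = value
        def step(self):
            self.storage = max(self.storage + self.inflow - self.demand, 0.0)

    # Link the components: runoff output feeds the reservoir input.
    runoff, reservoir = RainfallRunoffModel(), Reservoir()
    for rain in [5.0, 0.0, 8.0, 3.0]:
        runoff.set_value("rainfall", rain)
        runoff.step()
        reservoir.set_value("inflow", runoff.get_value("flow"))
        reservoir.step()
        print(f"storage = {reservoir.get_value('storage'):.1f}")

Because both models honour the same small contract, either one can be replaced, or further components added, without rewriting the rest of the system - which is the practical benefit modelling frameworks and interface protocols aim to deliver.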