W15 ShortCOM paper - key challenges

Key challenges faced by EDSS developers

Leads - Martin Volk, Sven Lautenbach, Dagmar Haase
Contributors - Brian S. McIntosh, Amgad Elmahdi, Keith Matthews, Jenifer Ticehurst, Stefan Sieber, Nigel Quinn, Ayalew Kassahun

Challenges that EDSS developers have to face are manifold. They can be grouped into four main categories:

  1. problems relating to the quantity, quality and appropriateness of end-user involvement in the development of EDSS,
  2. adoption (I have changed the word 'implementation' to adoption to avoid any confusion with software implementation - Brian) problems due to a lack of support from the management of the end users' organisation,
  3. challenges of the development process - (this seems the weakest section. I'd be tempted to remove it and focus on the other ones - Brian),
  4. challenges related to the business model of an EDSS. (funding and longevity are the most significant category of challenge in my view and so should be addressed first - Brian)

I think we need to impose a clearer conceptual model of EDSS development and use here to make sense of and better order the categories of challenge faced. Something simple based on life cycle e.g. development process (e.g. problems with end-user involvement) - adoption and use processes - with the addition of context (e.g. business model and environment) would do probably - I'll introduce something in the introduction section that we can maybe all follow in our writing - Brian

There are a number of definitions of Environmental Decision Support System in the published literature – however most coalesce around the idea of using computers to support decision making. Simon (1977) – one of the early pioneers of decision science – described a systematic decision-making process which he broke down into four sequential phases: intelligence, design, choice and implementation. In the intelligence phase the problem to be solved is defined, the ownership of the problem is established and essential descriptive background data are collected. In the design phase a simulation model or analogue that emulates the behaviour of the system is developed – the model is validated during this phase and decision options are identified. The latter include the various actions the decision maker will need to choose between. The choice phase follows, in which a solution proposed by the model is selected – though this solution is not necessarily a solution to the original problem. The fourth phase is the implementation phase, in which the decision support system or modelling framework is applied to solve the original problem.

Hence, when EDSS tools fail, it is important to identify the phase within which specific problems begin to surface (McIntosh et al. 2009) (I don't think that the model of phases articulated by Simon is the right model to represent and identify failure points in processes of innovation or adoption of new technology by individuals or organisations - for this we need to reference either the modified version of Simon's model that is Rogers' diffusion model, or something from the socio-technical literature like Lyytinen and Newman's PSIC model. I will insert something when it comes to my point for editing the complete document - Brian). This is more difficult than it sounds, because the same myopic thinking that led to the failure of the system can prevent accurate pinpointing of the seeds of failure. This can also be a problem of perception – where two analysts, presented with the same basic facts, nevertheless reach different conclusions (References needed here).

1) Problems with end-user involvement

The first category of the challenges to be discussed here is that relating to the quantity and quality of end-user involvement. This challenge is – as discussed during the Ottawa workshop – at least twofold:

Firstly, EDSS developers often lack a strategy for the participatory or stakeholder process during EDSS development. In their review paper, McIntosh et al. (2009) call both for understanding user needs and for working collaboratively. However, Newham et al. (2006) state that the absence of an exit strategy for the participatory process at the beginning of the project was a major constraint on the success of the EDSS development. What is more, even if the EDSS development is stakeholder-driven and participatory, it is difficult to determine its effect in the short term, which creates problems for future participatory projects in the area of interest. Newham et al. (2006) also found that focussing the participatory activities on a single organisation, while very useful for administrative and other reasons, reduced the overall impact of the project. To avoid the lack of a stakeholder strategy, a permanent point of contact that is well established and ideally embedded in the target organisation (e.g. a contact office) provides the needed reliability. In particular, personal contacts at the front end of both the target organisation and the providing research centre increase the probability that a new EDSS will be adopted (Sieber et al. 2010).

Secondly, it is still a challenge to precisely identify the end-users of the EDSS under development. This problem again has two aspects. On the one side, many EDSS have been developed in research projects funded by international or large national research foundations rather than by potential end-users or decision-makers. This makes it challenging to precisely identify who the end-user is (or could be) and what the requirements are. Volk et al. (2010) discuss the EDSS MedAction (Van Delden et al. 2007), which was funded by the EU and not by users. Users therefore had to be found as part of the project, which turned out to be very time consuming. (Potential) end-users were less committed to contribute to the development because there was no initial need from their organisations to use the EDSS, as there is in organisations that ask and pay for such a system.

A contrasting experience was made in the EU project PLUREL: here, potential users were involved in the development of the integrated Impact Assessment Tool (iIAT) in terms of conceptualisation and content. However, as the iIAT has to cover the entire EU, regional specifics could not enter the database and the tool GUI. The potential user – particularly at the regional scale – would use the tool for evaluating the land use impacts of specific projects ranging from infrastructure planning to new housing sites or the implementation of greenbelts. Therefore, in many cases, the iIAT is not specific enough. However, stakeholders and potential users highlighted the cross-European comparability as an advantage which can be achieved only by keeping the tool less specific (Haase et al. 2010). Another positive experience comes from the United States, where DSS that focus on assisting resource managers to accomplish specific tasks have been welcomed by users who see in them a means to influence the development process (Twery et al. 2005). Conversely, Volk et al. (2010) report that while it was possible to get feedback from the management level, contacts with the staff who were actually intended to use the EDSS were harder to establish (see Lautenbach et al. 2009 for the Elbe-DSS). Since policy analysts and assistants are the persons who will use (or not use) the EDSS, it is particularly important to include their requirements and needs in the design of the tools. Failing to provide easy ways of supporting their daily work might lead to a refusal to use the EDSS later on (Volk et al. 2010). On the other side, the practical matter of eliciting the relevant information from stakeholders to develop a useful and robust EDSS is rarely handled adequately (Reference). This weakness contributes to the high rate of failed EDSS (Reference). Stakeholders sometimes have difficulty articulating the decisions they are called upon to make and cannot definitively describe the bounds of the decision space within which they operate. The EDSS developer is challenged by having to understand the system he or she is attempting to simulate to the same degree as the stakeholder.

Summarising, there are two aspects EDSS developers have to consider in terms of stakeholder involvement and participation: a good strategy for the participatory process and, as part of it, a way to identify the real end-users of the tool.

What should in any case accompany the participatory process is consideration of the appropriateness of both information and tool: information must be applicable to the type of problem, the level of institutional capacity and the technical ability of the practitioners. If capacity is lacking, special efforts will be needed to facilitate information exchange. Internet-based information is key, but where it is not easily accessible, alternatives must be used. In addition, accessibility is another key aspect, as building on the current capacity of practitioners is more promising than requiring major upgrades in individual, organisational or technical ability. Finally, equity should be ensured: information exchange should respect cultural needs and gender issues, and take care not to discriminate against users or providers because of their remote locations (Global Water Partnership at http://www.gwptoolbox.org/).

2) Adoption problems

Concerning adoption problems, one has first of all to distinguish types of EDSS with respect to ease of adoption:

  1. EDSS for operational management (e.g. reservoir operation),
  2. EDSS for planning (e.g. irrigation scheduling, nature conservation),
  3. EDSS for strategic decision making (e.g. making design choices for flood protection or land use; Reference here).

Whereas EDSS of the first two types are often easily adopted provided the DSS is of the required quality, the last type is often used only to give insights into decision-making options. (all types of EDSS are to inform decision making and may have the same or different adoption success rates - the issue isn't type of EDSS it is the extent to which use of the EDSS requires the using organisation to change how it does what it does, or even what it does. I will add something on this when it is my turn to edit the full draft - Brian)

All adoption starts with identifying a real demand – and demand is strongest when existence is at stake. Volk et al. (2010) expect the best chances for successful adoption within an organisation when it faces new challenges. This has been the case for their presented examples of the Elbe-DSS (Lautenbach et al., 2009) as well as for FLUMAGIS (Volk et al., 2007; Volk et al., 2008). The European Water Framework Directive (EC 2000), the NATURA 2000 network (EU 2008), Total Maximum Daily Loads (TMDL) (NRC 2001) and other conventions are legal frameworks which initiate a change of management system within a certain period. That means that the development of the Elbe-DSS as well as of FLUMAGIS was driven by the needs of decision makers managing the Elbe River Basin and the Ems River Basin, respectively. Another driver can be a transfer or change of scale; e.g. evaluating the effects of land consumption by settlement and commerce is typically a national issue in Europe. Very recently, the EU became interested in gaining a broader picture of the impacts of dispersed settlement development in urban Europe – here a new tool is a more suitable approach than adapting any of the nationally optimised systems, although the content is the same (Nilsson et al. 2008). Finally, neglecting to identify stakeholder demand can end up in tools that impose a structure and rigour on the problem that the users may not want – an obvious result of not involving, or involving too late, the respective stakeholders and real users.

A specific challenge is the “market making” of an EDSS. Research has found (References here) that the adoption of an EDSS is more likely if it focuses on accomplishing a task that a potential user is already required to do, and if using the DSS makes that task easier. Adding new analyses can be successful, but only after a user sees the DSS as making the required work more manageable (Reference here). If the development of the DSS is not end-user driven, a substantial effort in promotion, demonstration and documentation is required. In most cases, however, substantial effort goes into implementing the core modelling activities and little into making the DSS easy to use and understandable.

A fully operational model system that convinces potential users of its added value compared to traditional, already implemented systems seems to be important. A functioning prototype is a key strategy for demonstrating convincing operational performance, and the feasibility of new model exercises should be demonstrated. At the same time, DSS developers face the risk of developing 'mock-up' prototypes that raise expectations the implemented system can never meet with respect to (i) spatial, temporal and thematic integration, (ii) technical performance and promised system advancements, and (iii) quality assurance features and promised data integration and model results. Concerning the latter, there is a clear need for integration and robustness of the constituent models, as integrated models represent reality more closely than separate models do. Integration should also be pushed, as most scientists believe that integrated systems will be better accepted (Reference here).

Publicly developed EDSS – public goods – are often more difficult to get bought into. Moreover, many projects lack clear ideas of cost effectiveness and a definition of success. There are significant transaction costs in serving a big user base but little credit in career terms for doing so. There is therefore an opportunity (for both the developer and the researcher) in science-private sector partnerships: the prototype is developed in a participatory manner by scientists and potential users, while the final tool, including the GUI, is built by a private firm specialised in tool development. But this is also a challenge as long as good blueprints (best-practice cases) are missing.
Reliability and credibility are still big challenges for every EDSS (development). Based on their experiences with EDSS that all link different models and databases, Volk et al. (2010) found that improvement is needed particularly regarding the treatment of uncertainty arising from sparse data availability, the necessity of simplification in simulation models, ambiguity due to the coupling of different models and tools, and calibration, validation and sensitivity analysis. Related to that, another important point is how EDSS developers communicate to users that the results or recommendations “produced” by the DSS are mostly uncertain, but that the systems are nevertheless useful tools. Therefore, EDSS should be used for forecasting as part of ex-ante analysis rather than for reporting on or analysing past performance.
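
To give a concrete impression of how such coupled-model uncertainty can be made visible to end-users, the following minimal Python sketch propagates uncertain inputs through a chain of two linked models by Monte Carlo sampling and reports an interval rather than a single value. The model functions and parameter ranges are purely illustrative assumptions and are not drawn from any of the systems cited above.

```python
import random
import statistics

def rainfall_runoff(precip_mm, runoff_coeff):
    """Hypothetical first model: converts precipitation to runoff (mm)."""
    return precip_mm * runoff_coeff

def nutrient_load(runoff_mm, export_coeff):
    """Hypothetical second model: converts runoff to a nutrient load (kg/ha)."""
    return runoff_mm * export_coeff

def monte_carlo(n_runs=10_000, seed=42):
    """Propagate uncertain inputs through the coupled model chain."""
    rng = random.Random(seed)
    loads = []
    for _ in range(n_runs):
        # Uncertain inputs: distributions and ranges are illustrative assumptions only.
        precip = rng.gauss(650.0, 80.0)           # annual precipitation, mm
        runoff_coeff = rng.uniform(0.25, 0.45)    # poorly known parameter
        export_coeff = rng.uniform(0.002, 0.006)  # poorly known parameter
        loads.append(nutrient_load(rainfall_runoff(precip, runoff_coeff),
                                   export_coeff))
    loads.sort()
    return {
        "median": statistics.median(loads),
        "p05": loads[int(0.05 * n_runs)],
        "p95": loads[int(0.95 * n_runs)],
    }

if __name__ == "__main__":
    result = monte_carlo()
    # Communicate a range, not a single 'true' answer, to the end-user.
    print(f"Nutrient load: {result['median']:.2f} kg/ha "
          f"(90% interval {result['p05']:.2f}-{result['p95']:.2f} kg/ha)")
```

Presenting a median together with an interval, rather than one number, is one simple way of communicating that EDSS outputs are uncertain yet still useful for ex-ante analysis.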

A further challenge is the reluctance of many model-based EDSS developers to admit that their models simply do not deliver results that are reliable enough to be useful. Issues here include the lack or cost of localised input variables, the spatial heterogeneity of variables adding to uncertainty, and the failure of models to represent key processes in an adequate way. To take a contrary view, the legal argument – that unreliable results expose developers or users to liability if DSS outputs end up in court – is perhaps overblown. Negligence in professional services is a well established concept, so if the development is rigorous, uses the best available knowledge and data, and carries the appropriate levels of disclaimers, this should not be the sticking point. The argument that EDSS are not reliable enough is often made by those who do not want to use a tool at all. Setting standards that cannot be met is the usual defence of vested interests when faced with rigorous analysis. The model may not be good enough, but even if it is, it still may not be accepted.

Tools (having fixed rules and codings) do not – or, perhaps better, should not – reflect the judgements made by professionals: standards are bent on occasion based on experience or other factors (which may be less good than the EDSS?). However, EDSS usually do not represent the entire decision space, but the space they do represent is represented rigorously. The expectation of an EDSS should not be that it makes the decisions (e.g. in a regulatory sense) but that it supports them (informs, advises, etc.). EDSS do make it explicit and transparent when rules or standards are set aside ('bent' is a euphemism that is unhelpful): they show the degree to which, and for whom, the rules have been waived. The tools require this flexibility to be available but also that it be reported transparently, especially when, as is often the case, there is a dispute about the basis on which decisions are being made. Adding an audit trail to the EDSS is a necessary part of professionalising and mainstreaming DSS use.
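
As one possible illustration of such an audit trail, the sketch below logs every occasion on which a standard is set aside, recording the degree of deviation, the beneficiary and the justification, so that the flexibility remains available but is reported transparently. The record structure, field names and example values are hypothetical and not taken from any existing EDSS.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class RuleWaiver:
    """One record of a rule or standard being set aside in the EDSS."""
    rule_id: str            # identifier of the standard being waived
    standard_value: float   # value the rule normally requires
    applied_value: float    # value actually used in the decision
    beneficiary: str        # for whom the rule was waived
    justification: str      # expert reasoning given for the deviation
    decided_by: str         # professional responsible for the judgement
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def deviation(self) -> float:
        """Degree to which the applied value departs from the standard."""
        return self.applied_value - self.standard_value

def log_waiver(waiver: RuleWaiver, path: str = "waiver_audit_log.jsonl") -> None:
    """Append the waiver to a simple line-delimited JSON audit log."""
    record = {**asdict(waiver), "deviation": waiver.deviation}
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Example: a (hypothetical) nutrient concentration standard relaxed for one permit holder.
log_waiver(RuleWaiver(
    rule_id="N_conc_limit",
    standard_value=2.5,
    applied_value=3.0,
    beneficiary="permit holder X",
    justification="historical land-use legacy, phased compliance agreed",
    decided_by="regional water authority analyst",
))
```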

A generic issue is that of “replacing” rather than supporting expertise, and of tools not fitting with the existing decision process. Following McCown’s idea of agency, a critical factor is not to replace or make redundant the skills of the decision maker – especially if those skills are what give them kudos; neither a farmer nor a policy maker is going to admit that they need a tool to do something they “should” be able to do themselves.

3) Issues in the development process

An important outstanding issue is the dynamic link between models simulating a continuous process (like the growth of biomass) and models simulating transitions (like changes in natural vegetation types). The latter introduce shocks into the system, which sometimes occur in reality (e.g. the planting of trees or logging) but are often artefacts of the modelling approach. In that case, a dynamic coupling from the transition models to the continuous models might produce unrealistic behaviour in the continuous models at the point in time where the transition takes place.
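
A minimal sketch of this coupling problem is given below: a continuously simulated biomass stock receives a discrete land-cover transition from a (here trivially hard-coded) transition rule, producing exactly the kind of shock described above, while spreading the same transition over several time steps softens the artefact. The growth and transition rules, parameter values and the ramping mitigation are illustrative assumptions only, not a recommendation from the cited literature.

```python
def simulate_biomass(years=30, growth_rate=0.08, capacity=200.0,
                     transition_year=15, loss_fraction=0.9, ramp_years=1):
    """Logistic biomass growth coupled to a discrete land-cover transition.

    With ramp_years=1 the transition is applied as a single shock; with
    ramp_years>1 the same total loss is spread over several time steps,
    which avoids an abrupt, possibly artificial discontinuity.
    """
    biomass = 50.0
    per_step_retention = (1.0 - loss_fraction) ** (1.0 / ramp_years)
    trajectory = []
    for year in range(years):
        # Continuous process: logistic growth of biomass.
        biomass += growth_rate * biomass * (1.0 - biomass / capacity)
        # Discrete transition output: remove biomass when the land cover changes.
        if transition_year <= year < transition_year + ramp_years:
            biomass *= per_step_retention
        trajectory.append(round(biomass, 1))
    return trajectory

# Shock applied in a single step vs. ramped over five years (illustrative only).
print(simulate_biomass(ramp_years=1))
print(simulate_biomass(ramp_years=5))
```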

From a technical point of view, further evaluation of DSS platforms as tools for the delivery of the modelling system to stakeholders is required (Newham et al., 2006; Volk et al., 2010). This is not without its costs, as stakeholder interaction requires considerable effort, and the resources needed were underestimated in the planning of the case study reported by Newham et al. (2006). Where DSS are developed within a project, the project duration is mostly around three years. Such three-year terms are the bare minimum required to gain the trust necessary to effectively undertake participatory activities. The reliance on short-term funding from external sources has the potential to constrain establishing such relationships and may lead to disillusionment among stakeholders, particularly at the local level.

The requirement analysis might entail high transaction costs among developers to build a common understanding. Ineffective discussions on terminology and on the prioritisation of requirements might occur. Individual decision power over the model design is relatively high if no superior instance steers representative decisions that reflect developers', users' and decision makers' opinions in a balanced way. A developer team spread across institutes weakens the effectiveness of a non-standardised model requirements analysis (MRA), as in-house policies generally work against a harmonised common interpretation and understanding.

4) Issues with the business model of an EDSS

Cost effectiveness is a major concern for DSS, and DSS longevity should be ensured by a business plan. Considerable sums are spent by governments (e.g. in the EU, the US and Australia) on projects that develop DSS, with some success – but more money does not always result in more success. Defining costs is relatively easy, but assessing effectiveness is bedevilled by disagreements over what the expectations of the DSS were in terms of outcomes (were they realistic? see the Introduction) and how those outcomes are to be defined. Outcomes evaluation assesses the effects of EMS use on values, attitudes and behaviours in the context of a decision-making or management process. However, evaluation of wider outcomes from the use of EMS confronts at least five challenges:

  1. Intangibility of many such outcomes. This can lead to evaluations that just measure that which is easily quantifiable rather than more important outcomes that are difficult or costly to measure. An example of an intangible outcome is that the social capital built up between individuals may be strengthened through stakeholder direct participation in an EMS development phase, which may enhance their adaptive capacity far more than the outputs from the EMS itself (Burton et al., 2007; Kaljonen, 2006).
  2. Long term and cumulative nature of change may also be incompatible with short-term project-based evaluations of outcomes. The project team may have moved on to new research long before the outcomes can be measured, complicating the process of understanding the EMS intervention and its consequences (Blackstock et al., 2007).
  3. Even where outcomes can be measured, can one be sure that success is an outcome of using the EMS? Establishing causality is very difficult when processes and participants cannot be directly replicated or controlled (Robson, 1993). This difficulty is compounded for interventions in the complex coupled social-ecological systems that are the typical focus of EMS (Bellamy et al., 2001). It is, however, possible to capture participants' perceptions that a change was stimulated by an intervention.
  4. Even where outcomes can be distinguished with clear causality then there will be considerable disagreement between stakeholders and interested publics on the relative importance of individual outcomes, particularly when these outcomes are non-commensurable. Outcomes other than those expected (or even desired) by the EMS funder will be discounted in favour of those that ‘fit’ (Fischer, 2000).
  5. There needs to be recognition of the limits on the influence that science generated “expert” outputs can have within a plurality of expertise derived in different ways (Stilgoe et al., 2006). Research based tools such as EMS are only one source of influence and usually not one of the more important (Solesbury, 2001).

With regard to continuity and valorisation, the question arises whether there is a habit mentality whereby people use a DSS once or twice for the immediate purpose it was developed for and then shelve it, with only those who can see beyond the immediate application continuing to use it. If so, should more time be spent on training 'outside the square' thinking? Or does it all come back to the simplicity and usability of the tool?

There are two main ways of guaranteeing continuity: commercialisation or open-source development.

Another successful option is to train local staff in how to update the tools themselves, as well as training local private consultants in how to maintain the tools, the idea being that local organisations can employ the consultants to maintain the tools if they do not have the in-house capacity to do so (see the Coastal Lake Assessment and Management (CLAM) website at www.CLAM.net.au). The idea was that if funding was being sought for research, and the existing DSS was to be updated using the research findings, then the funding application could account for the additional costs of updating the DSS. However, although end-users have shown continued interest in updating their CLAM tools two years after the initial projects were completed, it is not clear whether the small pots of money that they (local governments) use to patch together their research interests have been big enough to cover the consultant costs.

References

Bellamy et al., 2001
Blackstock et al., 2007
Burton et al., 2007
EC, 2000
EU, 2008
Fischer, 2000
Global Water Partnership, http://www.gwptoolbox.org/
Haase et al., 2010
Kaljonen, 2006
Lautenbach et al., 2009
McIntosh et al., 2009
Newham et al., 2006
Nilsson et al., 2008
NRC, 2001
Robson, 1993
Sieber et al., 2010
Simon, 1977
Solesbury, 2001
Stilgoe et al., 2006
Twery et al., 2005
Van Delden et al., 2007
Volk et al., 2007
Volk et al., 2008
Volk et al., 2010

"OLD VERSION"

Key challenges faced by EDSS developers

Leads - Martin Volk, Sven Lautenbach, Dagmar Haase
Contributors - Brian S. McIntosh, Amgad Elmahdi, Keith Matthews, Jenifer Ticehurst, Stefan Sieber, Nigel Quinn, Serena Chen

Challenges that EDSS developers have to face are manifold. The challenges can be grouped into problems of identifying and accessing potential end users, questions of acceptance and demand from the management of the organisation of the potential end users, challenges of the development process and challenges related to the business model and costs of an EDSS.

There are a number of definitions of Environmental Decision Support System in the published literature – however most coalesce around the idea of using computers to support decision making. Simon (1977) – one of the early pioneers of decision science – described a systematic decision making process which he broke down into four sequential phases : intelligence, design, choice and implementation. In the intelligence phase the problem to be solved is defined, the ownership of the problem is established and essential descriptive background data is collected. In design phase a simulation model or analog that emulates the behavior of the system is developed – the model is validated during this phase and a decision framework identified. The decision framework describes the various actions the decision maker will need to choose between. The choice phase follows which identifies a proposed solution identified by the model or model framework – though this solution is not necessarily a solution to the original problem. The fourth phase is the implementation phase which applies the decision support system or modeling framework to solve the original problem.

Hence when EDSS tools fail it is important to identify the phase within which specific problems begin to surface. This is more difficult than it sounds because the same myopic thinking that led to failure of the system can prevent accurate pinpointing of the seeds of failure. This can also be a problem of perception – where two analysts, presented with the same basic facts, nevertheless reach different conclusions.

User identification and access

Missing Strategy for the participatory process
Newham et al. (2006) stated that one feature constraining the success of their case study was that an exit strategy for the participatory process was not in place at the beginning of the project and needed to be developed. It is difficult to determine the effect, particularly in the short term, but potential problems with future participatory projects in the catchment. Newham et al. (2006) found also that the focus of the participatory activities on a single organisation, while very useful for administrative and other reasons, lessened the overall impact of their project as a result of a significant restructure of management organisations in the catchment. Participatory activities less focused on a single organisation would have assisted to overcome this problem.

Permanent and continuous contact office that is well established and ideally is implemented party in targeted organisation (e.g. contact bureau) assures needed reliability. Personal contacts at both front end of the target organisation and the providing research centre that last over a long time increases the probability of adoption of new model system (Sieber et al. 2010).

• Not being able to precisely identify users
Many DSS have been developed in research projects funded by the European Union or by large national research foundations. This makes it difficult to precisely identify who the user is or could be. Volk et al. (2010) showed for instance the example of the DSS MedAction (Van Delden et al., 2007). MedAction was funded by the EU and not by the (potential) users. That means that users had to be found as part of the project, which turned out to be very time consuming. Compared to other DSS development processes (Rutledge et al., 2009), (potential) users were less committed to contribute to the development, because there was no initial need from their organisations to use the DSS as there is in organisations that ask and pay for DSS development. Another experience was made in the EU-project PLUREL: Here, stakeholders and potential users were involved in the development of the integrated Impact Assessment Tool (iIAT) in terms of conceptualisation and content. However, as the iIAT has to cover the entire EU, regional specifics could not enter the database and the tool GUI. The potential user – particularly at the regional scale – would use the tool for evaluating the land use impacts of their specific projects ranging from infrastructure planning to new housing sites or the implementation of greenbelts. Therefore, in many cases, the iIAT is not specific enough. However, stakeholders and potential users highlighted the cross-European comparability as an advantage which can be achieved only by keeping the tool less specific (Haase et al., 2010).

In the United States DSS that focus on assisting resource managers to accomplish specific tasks have been welcomed by users who see a means to influence the development process. (Twery et al. 2005)

The practical matter of eliciting the relevant information from stakeholders to develop a useful and robust EDSS is rarely adequate and this weakness contributes to the high rate of failed EDSS. Stakeholders sometimes have difficulty articulating the decisions they are called upon to make and cannot definitely describe the bounds of the decision space within which they operate. The EDSS developer is challenged by having to understand the system he/she is attempting to simulate to the same degree as the stakeholder.

Policy clients very difficult to define

Problem - DSS are often used by qualified policy analysts and assistants but not by the policy- / decision-makers themselves (Reference).

One of the challenges reported by Volk et al. 2010 for the Elbe-DSS (Lautenbach et al. 2009) was the problem of incorporating the real end users of the EDSS. While it was possible to get feedback from the management level, contacts to the staff that was actually intended to use the EDSS were harder to establish. Since policy analysts and assistants are the persons who will use (or will not use) the EDSS, it is rather important to include their requirements and needs during the design of the tools. Failing to provide ways of helping them to achieve their day to day work in a easy way might lead to a refusal to use the EDSS later on.

Web based platforms get round many technical issues but it still not clear who is to use tools

Acceptance and demand (management)

Having a new (more integrated) model accepted when a set of separate models are already used but in a non-integrated manner

Role of market making – not all DSS are wanted (initially)
Adoption of DSS is more likely if it focuses on accomplishing a task that a potential user is already required to do, and using that DSS makes the task easier. Adding new analyses can be successful but only after a user sees the DSS as making the required work more manageable.

Real demand when existence is at stake (comment from Jim – more likely to go for DSS when business threatened “existence” – e.g Oz dryland farmers in the big drought)

Volk et al. (2010) expect the best chances for a successful adaptation in an organization in case of new challenges (because old habits die hard and so do existing solutions for common tasks). This has been the case for their presented examples of the Elbe-DSS (Lautenbach et al., 2009) as well as for FLUMAGIS (Volk et al., 2007; Volk et al., 2008). The European Water Framework Directive (EC 2000), NATURA 2000 network (EU 2008), Total Daily Maximum Loads (TDML) (NRC 2001) or other conventions are legal frameworks which initiate a change of management system in a certain period. That means that the development of the Elbe-DSS as well as of FLUMAGIS was driven by the needs of German decision makers managing the Elbe River Basin or the Ems River Basin respectively.
A challenge as argued before could be also a scale transfer or change; e.g. evaluating the effects of land consumption by settlement and commerce is a typical national issue in Europe. Very recently, the EU got interested to get a broader picture of the impacts of dispersed settlement development in urban Europe – here a new tool is a more suitable approach than adapting any of the nationally-optimised systems although the content is the same (Reference).

The tools impose a structure/rigour on the problem that may not be desired by the users

The cognitive style of individual users is considered to be a factor that influences their acceptance of the DSS and its underlying formal decision-making approach. It has been found that people with ‘intuitive’ and ‘feeling’ cognitive styles were less comfortable with applying a DSS, compared to people with ‘sensing’ and ‘thinking’ cognitive styles who prefer established routine and logical and objective analysis (Lu et al. 2001). A similar challenge is faced at an organisation level, where if the DSS does not align with the current practice of users it is unlikely to be accepted unless organisational change occurs.

Prototype too successful – over promised then disappointment

Important seems to be a fully operational model system that convinces potential users with an added value compared to traditional already implemented systems. A functioning prototype for demonstration is key strategy for convincing operational performance. Feasibility for new model exercises should be demonstrated. At the same time DSS developers face the risk to develop 'mock up'-prototypes, which can never meet expectations since implemented systems cannot cope with demonstrated (i) Spatial, time and thematic integration, (ii) Technical performance and promised system advancements as well as (iii) Quality assurance-features and primised data integration and model results.

Public goods are more difficult to get buy in for DSS
Links with cost effectiveness and the definition of success.

Significant transaction costs with big user base but little credit in career terms (for the developer and researcher)  opportunity for science-private sector partnerships: prototype is participatory developed by scientists and potential users, the final tool inclusing the GUI is done by a private firm specialised in tool development.

Generic issue of “replacing” rather than supporting expertise. Not fitting with the process. McCown’s idea of agency – critical factor is not to replace or make redundant the skills of the decision maker – esp if this is what gives then kudos, farmer or policy maker – neither is going to admit that they need the tool to do something they “should” be able to do
Reliability and credibility

Using DSS for forecasting as part of ex-ante analysis rather than reporting on / analysing past performance

Having the output of a DSS taken into court as part of a legal case for particular policy action

Robustness of constituent models – some just not good enough

How to deal with uncertainties in and of DSS’s?
On the basis of the experiences of Volk et al. (2010) with their presented DSS that all link different models and databases they found that improvement is needed particularly regarding the treatment of uncertainty because of sparse data availability, necessity for simplification in simulation models, ambiguity due to coupling of different models and tools and calibration, validation and sensitivity analysis.

Another important point is how DSS developers communicate to users that the results or recommendations “produced” by the DSS are mostly uncertain, but that they are nevertheless useful tools.

Reliability – demanding users – especially if legal. Tension of simplicity vs quality?
Reluctance of many model-based DSS developers to admit that their models simply do not deliver results that are reliable enough to be useful. Issues here include the lack or cost of localised input variables, the spatial heterogeneity of variables adding to uncertainty and the failure of models to represent key processes in an adequate way. To take a contrary view the legal argument is perhaps over blown. Negligence in professional services is a pretty well established concept so if the development is rigorous and uses best available knowledge/data and has the appropriate levels of disclaimers this should not be the sticking point. The argument of DSS not being reliable enough is often made by those who do not want to use a tool at all. Setting standards that cannot be met is the usual defence of vested interests when faced with rigorous analysis. The model may not be good enough but even if it is it still may not be accepted.

Tools (having fixed rules and codings) do (or better do) not reflect the judgements made by professionals – standards are bent on occasions based on experience or other factors (may be less good than the DSS??)
DSS usually do not represent the entire decision space - but the space they do represent is done so rigorously. Expectations of DSS should not be that they make the decisions (e.g. in a regulatory sense) but that they support (inform, advise ect). DSS do make explicit and transparent when rules/standards are set aside (bent is a euphemism that is unhelpful). They show the degree and for whom the rules have been waived. The tools require this flexibility to be available but also that it be reported transparently, especially when as is often the case there is a dispute about the basis on which decisions are being made. Adding the audit trail to the DSS is a necessary part of professionalising/main-streaming DSS use.

Development process

Scope creep

Systems integration difficulties – can’t show the tool in operation
An important outstanding issue is the dynamic link between models simulating a continuous process (like growth of biomass) and models simulating transitions (like changes in natural vegetation types). The latter introduces shocks in the system, which sometimes occur in reality (e.g. planting of trees or logging), but are often artefacts of the modelling approach. In the latter case a dynamic coupling from these models to the continuous models might give unrealistic behaviour in the continuous models at the point in time where the transition takes place.

Business model and costs

Ensuring DSS longevity through a business plan

Worries of cost effectiveness
Cost effectiveness is a major concern for DSS. Considerable sums spent (EU, US and Australia) some success - but not always more money = more success. Defining costs is relatively easy but effectiveness is bedevilled with disagreements over what were the expectations of the DSS in terms of Outcomes (were they realistic see Intro) and how are they to be defined.

Outcomes evaluation assesses the effects of EMS use on values, attitudes and behaviours in the context of a decision making or management process. However, evaluation of wider outcomes from the use of EMS confronts at least five challenges.

1) Intangibility of many such outcomes. This can lead to evaluations that just measure that which is easily quantifiable rather than more important outcomes that are difficult or costly to measure. An example of an intangible outcome is that the social capital built up between individuals may be strengthened through stakeholder direct participation in an EMS development phase, which may enhance their adaptive capacity far more than the outputs from the EMS itself (Burton et al., 2007; Kaljonen, 2006).

2) Long term and cumulative nature of change may also be incompatible with short-term project-based evaluations of outcomes. The project team may have moved on to new research long before the outcomes can be measured, complicating the process of understanding the EMS intervention and its consequences (Blackstock et al., 2007).

3) Even where outcomes can be measured, can one be sure that success is an outcome of using the EMS? Establishing causality is very difficult when processes and participants cannot be directly replicated or controlled (Robson, 1993). This difficulty is compounded for interventions in the complex coupled social-ecological systems that are the typical focus of EMS (Bellamy et al., 2001). However, attribution of perception that a change was stimulated by an intervention is possible.

4) Even where outcomes can be distinguished with clear causality then there will be considerable disagreement between stakeholders and interested publics on the relative importance of individual outcomes, particularly when these outcomes are non-commensurable. Outcomes other than those expected (or even desired) by the EMS funder will be discounted in favour of those that ‘fit’ (Fischer, 2000)

5) There needs to be a recognition of the limits on the influence that science generated “expert” outputs can have within a plurality of expertise derived in different ways (Stilgoe et al., 2006). Research based tools such as EMS are only one source of influence and usually not one of the more important (Solesbury, 2001).

Requirements analysis not properly implemented / budgeted for in the project (academic bidding means that project is scoped in the proposal)
From a technical point of view, further evaluation of DSS platforms as tools for the delivery of the modelling system to stakeholders is required (Newham et al., 2006; Volk et al., 2010). This is not without its costs as stakeholder interaction requires considerable effort and the resources needed were underestimated in the planning of the case study. In the case that DSS are developed within a project, the duration of such project is mostly around three years. Such three-year terms of the case studies is the bare minimum required to gain the trust necessary to effectively undertake participatory activities. The reliance on short term funding from external sources has potential to constrain establishing such relationships and may lead to disillusionment among stakeholders, particularly at the local level.”

The requirement analysis might have high transaction costs among developers to build a common understanding. Ineffective discussions on terminology and priority setting on requirements might occur. The decision power on the model design is relatively high, if no superior instance steers representative decisions reflecting developers’, users’ and decision makers’ option in a balanced way. A developer team across institutes weak the effectiveness of the non-standardised MRA as in-house policy generally works against a harmonised common interpretation and understanding.

Dangers of over supply of DSS - lots of academics looking for an application or a justification for their work

Continuity of tools dependent on key staff, how to continue beyond those staff members?

Is there a habit mentality that people use a DSS once or twice for the immediate purpose it was developed for and then shelve it, and only those who can see beyond the immediate application continue to use it? If so, then should more time be spent on training 'outside the square' thinking. Or does it all come back to the simplicity and useability of the tool?

Someone (governments?) need to pay at some point for the DSS to survive / develop

We have taken an approach of training local staff in how to update the tools themselves, as well as training local private consultants in how to maintain the tools, the idea being that local organisations can employ the consultants to maintain the tools if they do not have the inhouse capacity to do so. (See the Coastal Lake Assessment and Management (CLAM) website at www.CLAM.net.au). The idea was that if funding was being sort for research, and the existing DSS should be updated using the research findings, then the funding application could account for the additional costs of updating the DSS. However, although we have had continued interest from the end-users in updating their CLAM tools two years after the initial projects were completed, I dont know whether the small pots of money that they (Local governments) use to patch together their research interests have been big enough to cover the consultant costs.

References

have been moved to the related wiki page