Test It and They Might Come: Improving the Uptake of Digital Tools in Transparency and Accountability Initiatives

Christopher Wilson and Indra de Lanerolle*

Abstract

Information and communications technologies (ICTs) and data play an increasingly visible role in transparency and accountability initiatives (TAIs). There has been little research on how the selection of ICT tools influences the success of these initiatives. This article reports on research into TAI tool selection processes in South Africa and Kenya. Findings suggest that in many cases, tools are chosen with only limited testing of their appropriateness for the intended users in the intended contexts, despite widespread recognition among practitioners, funders and researchers that this carries significant efficiency and sustainability risks. We conclude by suggesting a strategy for increasing investment and effort in tool selection, in order to conserve overall project resources and minimise the risk of failure.

1 Introduction

Information and communications technologies (ICTs) and data play an increasingly visible role in transparency and accountability (T&A) programming. This might involve using social media to track parliamentary performance, mobile phones to conduct satisfaction surveys on public service delivery, reporting websites to document corruption, or radio to promote and facilitate political debate. Here, such processes are referred to as Technology for Transparency and Accountability Initiatives (T4TAIs).
T4TAIs have received significant attention in both the academic literature and the grey literature of professional reports and programming guides (Ahmed, Scheepers and Stockdale 2014; Avila et al. 2010; Fox 2015; Gaventa and McGee 2013; Joshi 2013; McGee and Carlitz 2013; Slater 2014). This body of work presents examples of effective use of technology in the service of T&A objectives, but also raises concerns about the effectiveness and impact of T4TAIs. Explanations offered for lack of success include a failure to sufficiently understand the users of technological tools (McGee and Carlitz 2013), failure to account for contextual factors (Joshi 2013), and limited technical capacities or investment in project management (Slater 2014). The way that tool selection processes influence the success of T4TAIs is seldom addressed. This article presents some initial findings from a research project which aims to help fill that gap.1

The process of tool selection, and the dynamics influencing it, are important. Our research confirms that T4TAIs use a very wide range of tools, including: social media platforms; off-the-shelf (OTS) software platforms such as Ushahidi or FrontlineSMS, which can be applied in dramatically different contexts with little customisation; paid subscriptions to cloud services for managing data; hardware such as tablets for conducting surveys; and mobile apps and web interfaces which T4TAIs build or commission from scratch. Selecting the right tool for the job shapes how accountability programming is implemented and its potential to influence T&A. The complicated processes through which these tools are selected involve different types of decisions (What kind of tool? Build or buy? Open source or proprietary?) and different models of decision-making (Top-down or bottom-up? With what degree of research, consultation or preparation?). Understanding tool selection processes is important for understanding how T4TAIs function, and the conditions that are associated with positive programming outcomes.

To better understand these dynamics, surveys and interviews were conducted with T4TAIs in Kenya and South Africa during 2014 and 2015. Findings suggest that in many cases, tools are chosen with only limited testing of their appropriateness for the intended users in the intended contexts, despite widespread recognition among practitioners, funders and researchers that such an approach is prone to significant efficiency and sustainability risks.

'Build it and they will come' is an established trope for describing a failure to anticipate user needs and realities in software development (Markus and Keil 1994). It has also been applied to software and content in a development context (Hatakka 2009), and within the T&A context specifically (McGee 2013). We discuss findings relevant to this trope in three steps: first, what shapes success and failure in tool selection and in the projects tools are part of; second, uptake failure, which our research participants linked to failures of both projects and tool selection; and finally, T4TAI strategies for mitigating uptake failure.

2 The process of tool selection

The study of T4TAIs, and new media research in general, has been dominated by a proliferation of case studies (see Fung, Gilman and Shkabatur 2010; Fox 2015; Ahmed et al. 2014; Avila et al. 2010; Gigler and Bailur 2014). These focus primarily either on whether ICT tools add value, or on which contextual and strategic components result in positive outcomes. However, the process through which ICT tools are chosen, and the characteristics of 'successful' tool selection processes, have not received significant attention (Fox 2015; Gaventa and McGee 2013; Joshi 2013).

Within the broader field of study considering philanthropic and social good initiatives, a handful of case studies explore the influence of specific factors on tool adoption (Merkel et al. 2007; TechSoup Global 2012; Zorn, Flanagin and Shoham 2011), and a larger number consider technological diffusion rates and adoption dynamics within sectors (Kim 2014; Zorn et al. 2011; Hoehling 2013). There is also the 'grey literature': guidance produced by organisations for direct use by other organisations (Kwok 2014; Dederich, Hausman and Maxwell 2006; Denison 2008; Wakefield and Sklair 2011).

There does not appear to be any systematic study of the processes through which T4TAIs select technological tools for their work. The recent Learning Study on the Users in Technology for Transparency and Accountability Initiatives (McGee and Carlitz 2013) suggests that many T4TAIs build their strategies around untested assumptions about tool users; when these assumptions do not hold true in implementation, project impact and sustainability are both affected. Understanding and improving processes of tool selection could help address this problem: tool selection is the opportune moment for strategic decisions that maximise tool adoption by users.

McGee and Carlitz offer a number of recommendations to improve the design of T4TAIs through better understanding of user needs and practices. Their study does not, however, explore the context in which such decisions are made, or the competing factors that influence tool selection. We conducted surveys and interviews with T4TAIs in Kenya and South Africa as a first effort to fill this gap.

3 The study and its methods

An online landscaping survey, disseminated via email, was conducted from December 2014 to January 2015, assessing the characteristics and perceptions of civil society organisations (CSOs) that actively use email and have a web presence. In Kenya, due to a low response rate, email distribution was supplemented by dissemination through the researchers' own networks. The online survey comprised 15–25 questions exploring: (a) CSO size, organisational structure, professionalisation and thematic focus; (b) how CSOs evaluate their own capacity and their enthusiasm for using technology in programming; and (c) the characteristics of a self-identified project that had a technology component. Responses were received from 247 South African organisations and 40 Kenyan organisations. This information was used to inform segmentation for research on tool selection processes in T4TAIs, and provided a preliminary population from which to draw the sample for the subsequent research.

Between January and April 2015, 38 in-depth, semi-structured interviews were conducted with representatives of 18 South African and 20 Kenyan T4TAIs that had recently selected or were currently selecting a tool for T&A programming. Interviews took the form of open conversations, and interviewees were encouraged to present an organic narrative of tool selection processes, which emphasised those details and factors they felt were most relevant, in order to capture the nuanced dynamics influencing tool selection. Interviewers used a code sheet with 28 key indicators, and asked supplementary questions to collect data on those indicators if the respondent did not refer to them in their narrative unprompted. The indicators covered the respondent's motivations for adopting technology, the processes through which tools were identified, selected and implemented, and their perspectives on the implications of the selection process for the success of the project.

Though the small sample size clearly limits the degree to which our findings can be generalised, we believe that they provide useful insights into the processes of T4TAI tool selection and that, combined with insights from other literature and our own experience of T4TAI programming, they provide a sound basis for preliminary recommendations.

4 Findings

The research found that less than a quarter of the initiatives described the tool they had chosen as a success. Common problems included the tool not working as expected, low uptake by users, longer development or modification times than anticipated, and struggles with finding or managing technical partners.

Organisations lacked knowledge in key areas: many started with little information on what they needed their tool to do, or on which tool could do what they needed. Very few had detailed knowledge about how tools worked before they chose them; although some had conducted research, it did not focus on tool availability or user needs. When we asked respondents what they would do differently if they ran the project again, one of the most common responses was 'know more about users or tools'. Below, we present our findings specifically in relation to the 'build it and they will come' phenomenon, which we found to be both common and significant in relation to outcomes of tool selection.

4.1 Success and failure in tool selection

To achieve a balanced assessment of whether tool selection and subsequent project implementation were successful, we relied on respondents' own definitions of success, identified during interviews, and on researcher assessments based on these definitions. Respondents commonly described success and failure in terms of achieving project targets or organisational objectives, and many did not clearly distinguish between the success of selection processes and success of projects.

Based on interviewees' self-assessments and researcher assessments, cases were classified as either successful, partially successful, unsuccessful or – if the project was too new to make a judgement – inconclusive. Where our classification differed from that of the respondent, it was usually because it was too early to tell, or because there was no evidence of user uptake.

Very few tool selection outcomes were successful – by our analysis, only 3 out of 18 in South Africa, and only 6 out of 20 in Kenya (hereafter we will present this as SA: 3/18, KE: 6/20). Even in cases where it was not possible to determine success (SA: 5/18, KE: 1/20), early evidence offered reasons to be concerned. Excluding such cases, successful tool selection was found in less than a quarter of cases.

The prominence of failed tool selection within the sample reinforces anecdotal evidence and suggestions in the literature that many organisations undertaking T4TAIs lack the capacities and resources to make strong tool selections, and that this has a negative impact on programming outcomes (Merkel et al. 2007; TechSoup Global 2012; Denison 2008; Fox 2015).

The most commonly described indicators of successful tool selection, in order of incidence, were the overall success of the project the tool was part of, the number of people using the tool, whether people used the tool in the way intended, and user feedback. One of the most common explanations of project failure was uptake failure, where the tool's intended users did not adopt it, or did not use it in the way or to the degree that the project anticipated. Other reasons reported included the chosen tool failing to work as expected or, in cases involving a bespoke tool, the tool not being completed.

4.2 Uptake failure

Almost half the cases experienced uptake failure (SA: 5/12, KE: 6/12). These included the production of social media reporting systems which did not receive reports, SMS scoring platforms which did not receive SMS messages, mobile data collection tools which were deployed, but which did not meet the needs of enumerators during deployment, and a data portal which did not attract users due to an unsuitable user interface. In another quarter of cases (SA: 3/12, KE: 3/12) the organisation had little or no information regarding tool use. We classified the tool selection processes as unsuccessful in such cases, though this occasionally differed from respondents' own views, as discussed below.

There were only two cases where the tool was not used at all. In one, the interviewee cited the complexity of the task (developing a database query system for a large membership-based advocacy organisation) and the inability to find a suitable technical partner as primary reasons for complete uptake failure. In the other, the costs of deploying the tool were beyond the resources of the organisation.

4.3 Strategies to mitigate uptake failure: user research and trialling

Neither user research – here understood broadly as research conducted by T4TAIs on the people that they hope will use a tool – nor trialling – trying out tools with small groups prior to selection or deployment – was well represented in our sample. Relatively few organisations conducted any form of research on their intended tool users (SA: 9/18, KE: 6/20), and even fewer tested out tools prior to selecting or adopting them (SA: 5/18, KE: 3/20). Trialling and research were especially rare in cases where the targeted users were a broad public, a characteristic also associated with high rates of uptake failure.

Our research offers some evidence that trialling and user research could be effective in preventing uptake failure. In both countries, the organisations that conducted user research were the most likely to see their tools adopted. Prior experience of using a tool in a project context was even more strongly correlated with uptake success. Respondents described acquiring such experience through the use of tools in other programmes, or by testing and trialling tools. All but one of the organisations that trialled their tools succeeded, while most of those that did not trial failed.

4.4 User research and trialling in a project context

The tool selection narratives provided by respondents both reinforced and complicated this positive correlation between research or trialling and tool uptake. Respondents generally recognised the value of user research, and a lack of knowledge about tools and tool users was a frequently mentioned reason for project failure. Many saw that user research would have improved tool selection and project processes, but felt they did not have the human, financial or technical resources for research – or, indeed, the time. As one respondent put it: 'This was a fast project, there was no time for research. The whole project was really an experiment.'

Some organisations already had extensive knowledge of and engagement with the communities of users they were targeting, but did not recognise the value of conducting additional, structured research. Less than half the initiatives conducted user research prior to tool selection or deployment (SA: 9/18, KE: 6/20), though many thought, with hindsight, that it would have been beneficial. Lack of general or specific research on tool users was regularly associated with uptake failure.

We found this perspective frequently repeated across the sample, despite dramatic variations in the time and resources invested in tool selection and implementation, and in the complexity of tool selection and implementation processes. This suggests that there is limited understanding of what structured user research is, or of what value it can add.

Testing or trialling of tools prior to selection or deployment was rare (SA: 5/18, KE: 3/20), but those who had done it had become very strong advocates for trialling, and viewed it as central to success. As one respondent put it: 'You don't know something is good until you see and try it.' A few projects in the sample went through multiple iterations of both tools and project modalities, and described early failures as important learning experiences, performing much the same function as trialling would have. Aside from these projects, however, there seemed to be little awareness of how structured trialling could save some of the time and money costs implied by project failure and restructuring.

It is also worth distinguishing between projects that purchased or adopted an off-the-shelf (OTS) tool (SA: 7/18, KE: 9/18), and projects that 'built', commissioned or developed bespoke tools (SA: 11/18, KE: 8/16). Use of OTS tools included the use of social media to facilitate public discussions, use of a popular instant messaging application for communication between citizen monitors in different parts of the country, or the use of content management systems to develop and deploy websites.

Among a few of those who built their own tools, trialling occurred after initial builds and prior to deployment (SA: 3/7, KE: 1/8). Few of the OTS tools used were selected on the basis of research or trialling (SA: 3/7, KE: 2/8), and none on the basis of trialling more than one tool.

5 Discussion

Our overall finding that most tool selections were unsuccessful is clearly a matter of concern for our respondents, their donors and other stakeholders, and for other practitioners in the field of technology for transparency and accountability. Equally important is the perspective that uptake and project failure could have been avoided if research and trialling had been deployed.

We also found that trialling was more strongly correlated with success than research, supporting the view that trialling is a good potential strategy for practitioners.

5.1 What makes trialling particularly useful?

Trialling is an approach widely used in many software innovation processes, and is particularly emphasised in user- or human-centred design approaches (see, for example, ISO 2010). There is a strong practical and economic case for trialling. Practically, it enables assumptions about a tool's ease of use, effectiveness and appropriateness to be tested before deployment, reducing risk of failure, and helps determine whether a tool works for specific groups of users.

Economically, problems discovered late are usually more expensive to correct than those discovered early. Research using the diffusion of innovations (DoI) model has demonstrated that individuals often use trialling as a strategy to offset the risks of adoption (Rogers 1995). This research supports the idea that trialling is an effective decision-making strategy not only because it enables the decision-maker to 'kick the tyres' and see whether the tool does what they expect, but also because it enables them to discover how the tool works in 'the real world'. This discovery is important because it aids understanding and addresses the usefulness, appropriateness and effectiveness of the tool, particularly when the decision-maker has not clearly articulated to themselves what exactly they expect. Surfacing issues in this way is difficult using other methods.
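The economic case can be made explicit with a simple, illustrative comparison; this framing is our own and not drawn from the study data. If a trial costs C_t, correcting an uptake failure after deployment costs C_f, and trialling reduces the probability of such a failure from p_0 to p_1, then trialling pays for itself whenever

\[
C_t < (p_0 - p_1)\,C_f
\]

Under this framing, the more costly a post-deployment failure is, the more even a modest reduction in its probability is worth.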

Respondents reported a number of discoveries about their intended users following a tool's selection that highlight how effective trialling would have been. In one case, the organisation had extensive knowledge of its intended community of users, but it was only after deployment that they discovered that their target users in the area where the project was being deployed did not use their choice of media at all; trialling would have surfaced this issue quickly. Another respondent reported that it was only when the developer and the organisation deploying the technology went to the deployment location together that the developer realised that the tool would need to store data offline until mobile networks became available. This provides an example of a trial strategy surfacing something that the deploying organisation had not anticipated would be critical to the tool design and selection.
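To make the second example concrete, the sketch below shows one minimal way a mobile data collection tool can queue records locally and push them when a connection becomes available. It is a hypothetical illustration only: the endpoint URL, storage schema and function names are our own assumptions, not details of the tool described by the respondent.

```python
import json
import sqlite3
import urllib.error
import urllib.request

# Minimal sketch of an offline-first submission queue: survey records are
# written to a local SQLite store first, and pushed to a (hypothetical)
# server endpoint only when the network is reachable.

DB_PATH = "pending_submissions.db"
UPLOAD_URL = "https://example.org/api/submissions"  # hypothetical endpoint


def init_store(db_path=DB_PATH):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)"
    )
    conn.commit()
    return conn


def save_locally(conn, record):
    # Always persist locally first, so no data is lost when there is no coverage.
    conn.execute("INSERT INTO pending (payload) VALUES (?)", (json.dumps(record),))
    conn.commit()


def sync_pending(conn, url=UPLOAD_URL):
    # Attempt to upload each stored record; keep anything that fails for later.
    for row_id, payload in conn.execute("SELECT id, payload FROM pending").fetchall():
        request = urllib.request.Request(
            url,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            with urllib.request.urlopen(request, timeout=10):
                conn.execute("DELETE FROM pending WHERE id = ?", (row_id,))
                conn.commit()
        except urllib.error.URLError:
            break  # no connectivity; try again on the next sync attempt


if __name__ == "__main__":
    store = init_store()
    save_locally(store, {"enumerator": "A01", "answers": {"q1": "yes"}})
    sync_pending(store)
```

Precisely because requirements like save-first, sync-later are easy to overlook on paper, a short field trial with enumerators in the deployment location is often the cheapest way to discover whether they are needed at all.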

It could be argued that, since many T4TAIs are described as pilots, this is itself a form of trialling. We would argue, though, that these pilots do not qualify as trials because, as we have reported above, there is little systematic gathering of feedback from users to identify how – if the 'pilot' were to be the basis of further intervention – a tool should be modified or an alternative found.

5.2 Why do organisations trial so rarely?

In many cases, the organisations we interviewed did not choose a tool at all. Sometimes, a tool had already been selected by donors or foreign partners before our respondent became involved in the project. More often, they sought technical partners to work with at an early stage, and these partners took on all or most of the responsibility for identifying or building tools. As one respondent explained, their chosen partner dictated the choice of tool: 'To be honest – in terms of technology – we weren't really choosing at all.'2 Time and resource constraints were also frequently mentioned. Many respondents reported that projects involving technologies had taken much longer than they had expected, hoped or planned for. Trialling takes time, and as launch dates approach, initiatives may choose to skip this step to save time, even though they recognise its value. In some cases, trialling was simply not possible. One respondent explained that they had to order the equipment they needed online, and could not try it out prior to purchase.

It is also worth noting that the degree of resources, time and effort invested varied greatly among the research participants. Some projects were completed in a matter of weeks, with limited resources and little formal planning, while other initiatives involved substantial budgets, the hiring of additional dedicated staff and multi-year plans. With two exceptions, the cases which involved building tools from scratch were those which required the most substantial resources. Overall, there was not a clear relationship between resources deployed and success.

For some organisations, such constraints make trialling challenging, and sometimes unfeasible. However, we suggest two broader explanations relating to the organisations implementing T4TAIs that could help to account for the lack of trialling and user research. These speak less to constraints than to deeper questions of how organisations approach tool selection, and how they understand the relationship between technology choices and user engagement.

5.3 Proxy errors and unknown unknowns

One explanation may be that T4TAI project managers regard themselves as reasonable proxies for their users. Rather than go into the field to anticipate user needs, they looked in the mirror. A number of respondents reported that though they did not conduct trials with potential users, they did try out the tool themselves.

We believe this is likely to be problematic for T&A work, and perhaps particularly for tool designers and project managers in developing country contexts. A software developer at an elite university near Boston aiming to build a social platform for American students might have had a lot in common with their intended users, and may have been able to use themselves as a test case with great success. A manager in an organisation based in an African capital city, aiming to improve citizens' ability to hold their local government to account in a rural area, may have much less in common with their intended community of users.

Though our research did not include interviews with project managers based in developed countries that were selecting and developing tools for use in developing countries, it would be reasonable to assume that they may be even further removed.

In our research sample, there were many dimensions to this lack of commonality, from the relatively obvious questions of class, education and access to power and technology, to less obvious factors such as daily routines, cultures and attitudes. In South Africa, with its history of apartheid and very high Gini coefficient, these differences may be particularly acute.

We suggest that a related explanation is that some managers and organisations may suffer from a problem of 'unknown unknowns'. We noticed that those organisations that conducted research and trialling were often those that already had quite extensive knowledge of how their targeted users currently use technologies. It could be that those who had less knowledge of their intended users also had less understanding of the importance of this knowledge gap. Conducting trialling or user research does not require much technical knowledge or skill, but the advantages of doing so may not be immediately obvious.

Project managers tasked with selecting and implementing tools may not be able to realistically forecast the costs of research or trialling when buying, adapting or building a tool – or the costs of uptake failure.

The decision of whether to buy or build tools was critical for the participants in our research. Few respondents identified their organisations as 'tech' organisations, or described them as 'innovative' in their use of technology, and most stated that they had very limited technical knowledge or skills. Yet most organisations chose to build or commission the development of bespoke tools rather than buying or adopting OTS tools. More surprising still, few of those who opted for 'build' over 'buy' conducted research on, or trialled, available existing tools before undertaking the challenging and complex task of creating a new tool.

On the face of it, this is unexpected. Developing new digital tools is a risky endeavour, even when tool selection and implementation involve little investment of time and resources, and even in well-resourced organisations (Tidd, Bessant and Pavitt 2005). A common approach to managing this risk is iterative, 'adaptive design' (Highsmith 2013), which involves budgeting and planning for an iterative cycle of versions that are refined over time through testing. We encountered only one case of this approach in our research.

A number of respondents clearly recognised the value of iterative development. They had clear and detailed views on the shortcomings of the tools they had built or commissioned, but lacked the capacity, authority, financial resources or time to invest in further development.

6 Conclusion

We believe that the sample in this study is large enough to represent a substantial portion of the T4TAIs present in Kenya and South Africa. The samples included a diverse range of organisations – including both long-standing T&A organisations and tech-focused innovators – and the initiatives described by respondents cover a broad range of technologies. This research was in any case designed to bring issues and insights to the surface, rather than to test firm hypotheses. Our conclusions are therefore tentative. Further research exploring these organisations in greater depth, or applying comparable methods in other countries, would be useful to confirm or question our conclusions, and to produce additional insights.

With regard to the 'build it and they will come' phenomenon, our research supports earlier findings. Avila et al. (2010), McGee and Carlitz (2013) and others have highlighted the need for better understanding of users if tools are to be used appropriately and successfully in T&A projects. We also note, however, the relevance of trialling strategies for mitigating the risk of uptake failure, and offer two explanations for why T4TAIs fail to learn about users before selecting and deploying technological tools: proxy errors, in which project managers or teams assume that they are themselves reasonable proxies for the target users of T4TAI tools, and a lack of knowledge about the risks and costs of not understanding users. These explanations highlight entry points for supporting more strategic tool selection and implementation by T4TAIs.

McGee and Carlitz recommend that 'in both design and implementation phases, actors involved in T4TAIs need to gather more information about potential and actual users' (2013: 30). Our research supports this recommendation and also suggests the need for a further focus within the T4TAI community (both researchers and practitioners) on the user, in particular on trialling. We suggest two basic approaches that could be tested by practitioners in the field and evaluated by researchers.

1 Test first. Trialling during project planning helps to reveal the limitations of tools in context and to identify obstacles to user uptake. It can take a variety of forms. A commitment to documenting trialling methods could enable shared learning across initiatives and organisations, helping to develop best practice.

2 Find or buy before building. A systematic focus on identifying and trialling existing OTS technologies before building or commissioning the development of bespoke tools could lead to fewer tool selection failures and better use of limited resources. The risks of failure may be lower for initiatives employing OTS tools, and the costs of failure for strategies that purchase or adopt OTS tools are much lower than the costs of failure for bespoke tool development. Identifying and testing available OTS tools may require some research, but our findings suggest that reaching out to existing networks or conducting simple web searches could be sufficient to identify potential OTS tools for many T4TAIs.

Together, these approaches suggest a strategy of increasing investment and effort in tool selection, in order to conserve overall project resources and minimise the risk of failure. According to such a strategy, T4TAIs should investigate and test tools before adopting them, and attempt to adopt OTS tools before developing bespoke tools. Such an approach also implies a handful of simple rules of thumb that T4TAIs can apply to strengthen tool selection processes and project impact.

Lastly, the lack of awareness among respondents regarding appropriate tool selection strategies and resources suggests a communication problem between T4TAI researchers and practitioners. As McGee and Carlitz (2013) point out, though technology for transparency and accountability is a relatively new field or sub-field, evidence suggests that existing research is having insufficient impact on practice. Collaborative efforts such as the Transparency and Accountability Initiative, Research4Development, Making All Voices Count and the GovLab have taken preliminary steps to address this gap between research and practice through guides and online resources.3 Such efforts should be supported and critically reviewed to determine their effectiveness in bridging the gap. More focused efforts (such as the Framework for Tool Selection being developed from the research reported here) should also be evaluated.

Our research also suggests that local networks may have a profound influence on tool selection practices, but that in Kenya and South Africa at least, they are not as well developed, in terms of either capacities or connectedness, as some might expect. Donors, practitioners and researchers all have different roles to play in supporting the development of such networks, which can have an immediate impact on the resources available to T4TAIs for tool selection processes.

More directly, our research has confirmed the importance of user research for successful tool selection processes and suggested that trialling strategies can be especially valuable. We have also suggested a handful of heuristics that T4TAIs can implement during the tool selection process, and which merit careful assessment by researchers and evaluators. We believe that this can make a significant contribution to systematic learning around failure and success of tools in the service of T&A programming.

Notes

* The research on which this article is based was funded by the Research, Evidence and Learning Component of Making All Voices Count.

1. The research project was conducted by the authors with Sasha Kinney and Tom Walker.
2. Cape Town, South Africa, April 2015.
3. See www.transparency-initiative.org/, http://r4d.dfid.gov.uk/, www.makingallvoicescount.org/ and http://thegovlab.org/, respectively.

References

Ahmed, A.; Scheepers, H. and Stockdale, R. (2014) 'Social Media Research: A Review of Academic Research and Future Research Directions', Pacific Asia Journal of the Association for Information Systems 6.1–3: 21–37

Avila, R.; Feigenblatt, H.; Heacock, R. and Heller, N. (2010) Global Mapping of Technology for Transparency and Accountability, London: Open Society Foundation

Dederich, L.; Hausman, T. and Maxwell, S. (2006) Online Technology for Social Change: From Struggle to Strategy, https://ict4peace.wordpress.com/2006/10/17/online-technology-for-social-change-from-struggle-to-strategy/ (accessed 8 October 2015)

Denison, T. (2008) 'Barriers to the Effective Use of Web Technologies by Community Sector Organisations', in CCNR (2008), 5th Prato Community Informatics and Development Informatics Conference 2008: ICTs for Social Inclusion, http://ccnr.infotech.monash.edu/assets/docs/prato2008papers/tomdenison.pdf (accessed 8 October 2015)

Fox, J. (2015) 'Social Accountability: What Does the Evidence Really Say?', World Development 72: 346–61

Fung, A.; Gilman, H.R. and Shkabatur, J. (2010) Impact Case Studies from Middle Income and Developing Countries: New Technologies, London: Transparency and Accountability Initiative

Gaventa, J. and McGee, R. (2013) 'The Impact of Transparency and Accountability Initiatives', Development Policy Review 31: s3–28

Gigler, B.S. and Bailur, S. (2014) Closing the Feedback Loop: Can Technology Bridge the Accountability Gap?, Directions in Development, Washington DC: World Bank

Hatakka, M. (2009) 'Build It and They Will Come?: Inhibiting Factors for Reuse of Open Content in Developing Countries', Electronic Journal on Information Systems in Developing Countries 37.5: 1–16

Highsmith, J.A. (2013) Adaptive Software Development, London: Addison-Wesley

Hoehling, A. (2013) The 7th Annual Nonprofit Technology Staffing and Investments Report, Portland OR: Non-Profit Technology Network

ISO (2010) Standard ISO 9241-210:2010. Ergonomics of Human-system Interaction – Part 210: Human-centred Design for Interactive Systems, Geneva: International Organization for Standardization

Joshi, A. (2013) 'Context Matters: A Causal Chain Approach to Unpacking Social Accountability Interventions', Work in Progress Paper, Brighton: IDS

Kim, S.Y. (2014) 'Democratizing Mobile Technology in Support of Volunteer Activities in Data Collection', unpublished PhD dissertation, Carnegie Mellon University, School of Computer Science

Kwok, R. (2014) Going Digital: Five Lessons for Charities Developing Technology-based Innovations, London: Nesta Impact Investments, www.nesta.org.uk/sites/default/files/going_digital.pdf (accessed 8 October 2015)

Markus, L.M. and Keil, M. (1994) 'If We Build It, They Will Come: Designing Information Systems That People Want to Use', Sloan Management Review 35.4: 11–25

McGee, R. (2013) 'Aid Transparency and Accountability: "Build It and They'll Come"?' Development Policy Review 31: 107–24

McGee, R. and Carlitz, R. (2013) Learning Study on the Users in Technology for Transparency and Accountability Initiatives: Assumptions and Realities, Brighton: IDS

Merkel, C.; Farooq, U.; Xiao, L.; Ganoe, C.; Rosson, M.B. and Carroll, J.M. (2007) 'Managing Technology Use and Learning in Nonprofit Community Organizations', Proceedings of the 2007 Symposium on Computer Human Interaction for the Management of Information Technology, New York NY: Association for Computing Machinery, http://portal.acm.org/citation.cfm?doid=1234772.1234783 (accessed 8 October 2015)

Rogers, E. (1995) Diffusion of Innovations, 4th ed., New York NY: Free Press

Slater, D. (2014) Fundamentals for Using Technology in Transparency and Accountability Organisations, London: Transparency and Accountability Initiative

TechSoup Global (2012) 2012 Global Cloud Computing Survey Results, www.techsoupglobal.org/2012-global-cloud-computing-survey (accessed 8 October 2015)

Tidd, J.; Bessant, J. and Pavitt, K. (2005) Managing Innovation: Integrating Technological, Market and Organisational Change, 3rd ed., New York NY: John Wiley

Wakefield, D. and Sklair, A. (2011) Philanthropy and Social Media, London: The Institute for Philanthropy

Zorn, T.E.; Flanagin, A.J. and Shoham, M.D. (2011) 'Institutional and Noninstitutional Influences on Information and Communication Technology Adoption and Use among Nonprofit Organizations', Human Communication Research 37.1: 1–33

Copyright Information

CC BY

© 2016 The Authors. IDS Bulletin © Institute of Development Studies

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non Commercial 4.0 International licence, which permits downloading and sharing provided the original authors and source are credited – but the work is not used for commercial purposes. http://creativecommons.org/licenses/by-nc/4.0/legalcode

The IDS Bulletin is published by Institute of Development Studies, Library Road, Brighton BN1 9RE, UK

This article is part of IDS Bulletin Vol. 47 No. 1 January 2016: 'Opening Governance', 113–126; the Introduction is also recommended reading.