When Does ICT-Enabled Citizen Voice Lead to Government Responsiveness?

Tiago Peixoto and Jonathan Fox*

Abstract

This article reviews evidence on the use of 23 information and communications technology (ICT) platforms to project citizen voice to improve public service delivery. This meta-analysis focuses on empirical studies of initiatives in the global South, highlighting both citizen uptake ('yelp') and the degree to which public service providers respond to expressions of citizen voice ('teeth'). The conceptual framework distinguishes two roles played by ICT-enabled citizen voice: informing upwards accountability, and bolstering downwards accountability through either individual user feedback or collective civic action. This distinction between the ways in which ICT platforms mediate the relationship between citizens and service providers allows for a precise analytical focus on how different dimensions of such platforms contribute to public sector responsiveness. These cases suggest that while ICT platforms have been relevant in increasing policymakers' and senior managers' capacity to respond, most of them have yet to influence their willingness to do so.

1 Introduction

Around the world, civil society organisations (CSOs) and governments are experimenting with information and communications technology (ICT) platforms that try to encourage and project citizen voice, with the goal of improving public service delivery. This meta-analysis focuses on empirical studies of initiatives in the global South, highlighting both citizen uptake ('yelp') and the degree to which public service providers respond to expressions of citizen voice ('teeth'). The conceptual framework is informed by a key distinction between two genres of ICT-enabled citizen voice – aggregated individual assessments of service provision and collective civic action. The first approach constitutes user feedback, providing precise information in real time to decision-makers. This allows policymakers and programme managers to identify and address service delivery problems – but at their discretion. Collective civic action, in contrast, can encourage service providers to become more publicly accountable – an approach that depends less exclusively on decision-makers' discretion about whether or not to act on the information embodied in feedback. This conceptual distinction between two different ways in which ICT platforms mediate the citizen–service provider relationship allows for a more precise analytical focus on how different dimensions of these ICT platforms contribute to public sector responsiveness.

This study begins with a conceptual framework intended to clarify the different links in the causal chain between ICT-enabled opportunities to express voice (platforms) and institutional responses. In other words, how and why are these platforms supposed to leverage responses from service providers? The answers turn out not to be so obvious. Our approach was informed by a close review of the available evidence, primarily quantitative, about experiences with 23 ICT platforms in 17 countries.1 This focus on unpacking causal chains is informed by two factors. First, the broader literature on the drivers of accountability increasingly emphasises using causal chains to address the analytical puzzle of distinguishing how and why citizen action may or may not lead to public sector response (Fox 2014; Grandvoinnet, Aslam and Raha 2015; Joshi 2014; Peixoto 2013). Second, analysis revealed that we do not see a generic type of platform leading to a generic type of response. Instead, we see key differences in the institutional (not technological) design of the interface that may be relevant for voice, citizen action and institutional response. The evidence so far indicates that most of the ICT platforms that manage to leverage responsiveness somehow directly involve government.

While ICT-enabled voice platforms vary widely across many dimensions, this analysis emphasises several differences that are hypothesised to influence both citizen uptake and institutional response. These include the degree of public access to information about the expression of voice – does the public see what the public says? Does the ICT platform document and disclose how the public sector responds? They also include institutional mechanisms for public sector response – do the sponsoring agencies or organisations take specific offline actions to prompt service providers' responses? As a first step towards homing in on these variables, this article maps the 23 platforms studied in terms of various empirical indicators of these distinct dynamics. This exercise is followed by a discussion of propositions about what may or may not link voice to institutional response.

Note that this study does not focus on two ways in which service delivery agencies use ICT that are very relevant for understanding their full array of relationships with users. First, many public agencies are using mobile phones and social media to disseminate information efficiently. However, if those interfaces are one-way ('inside-out', or 'top-down'), then they do not 'count' as ICT-enabled citizen voice for the purposes of this study. Second, agencies can use ICT for internal administrative reforms that can bolster their capacity to respond to citizen concerns – by reducing the discretionary power of front-line providers through increasing the capacity of managers to monitor service provider performance, as well as by helping consistently track whether and how problems are being addressed. This study covers evidence of institutional response to ICT-enabled systems for users to exercise voice, rather than the broader set of cases of relevant e-government initiatives.

2 The conceptual map: unpacking digital engagement

The broader analytical context for this article involves three simultaneous trends in the literature on the role of information in leveraging public accountability. First, the number and diversity of practitioner-led digital engagement initiatives for service delivery continue to grow, involving both effervescent experimentation and efforts to scale up. Experimentation with social accountability tools has been growing within the portfolios of both large public and private aid donors for the past decade, and some of these tools involve ICT. For instance, many World Bank projects with 'identifiable beneficiaries' now include some kind of feedback mechanism, and citizen engagement has become a policy framework which includes the use of ICT (World Bank 2014a). Major private donors, such as the Omidyar Network and Google, are also making significant investments to encourage 'civic technology' – in both the global North and South. New donor partnerships are also encouraging experimentation with civic technology in very low-income countries, led most notably by Making All Voices Count.2

Second, while growing media coverage of ICT-enabled voice platforms is often enthusiastic, social science research on the dynamics and impacts of these initiatives lags far behind, and the limited existing evidence does not yet support unqualified optimism.3 This study is distinctive in that it draws on a recent round of unusually comprehensive empirical studies that involve both large-scale surveys and access to government agency data. This new research suggests that the key dynamics that drive both voice and institutional response may be different from some of the widely held impressions projected by the media, donors and platform developers. Take, for example, the case of the Kenyan urban water agency's MajiVoice (see also Welle, Williams and Pearce, this IDS Bulletin), a large-scale user-feedback system widely presented as an ICT-enabled voice platform. Recent surveys find significant evidence of institutional response, grounded in an effective complaint tracking system – yet three quarters of the complaints are filed in person, 21 per cent by phone and less than 3 per cent by Short Message Service (SMS) or online (Belcher and Abreu-Lopes 2016, forthcoming).

Third, the focus on the potential for citizen voice to improve public service delivery involves at least four distinct yet overlapping arenas of practice – the open data movement, open government reforms, anti-corruption efforts and social accountability initiatives. In spite of the apparent new policy consensus that all these good things go together, in practice, the limited synergy between these distinct approaches suggests that the whole is still not greater than the sum of the parts (Carothers and Brechenmacher 2014). Most of these governance reform approaches rely heavily on the potential power of information to stimulate voice, yet they assign information different roles. There are several conceptual challenges involved in specifying the causal mechanisms that may link voice and institutional response – aside from the empirical questions (documenting uptake is more straightforward than documenting institutional response). The first analytical challenge is to disentangle voice from responsiveness. Much of the first wave of research on ICT-enabled voice platforms focuses primarily on citizen uptake (e.g. Gigler and Bailur 2014), without clear evidence that the feedback loop actually closes. In practice, the concept of the feedback loop is often used to imply that uptake (e.g. citizen usage of crowd-sourced platforms to report feedback) necessarily leads to positive institutional responses. In other words, there is a high degree of optimism embedded in the way the concept tends to be used. In contrast, the framework proposed here avoids this assumption by treating the degree of institutional response as an open question.

The second conceptual challenge is to specify the relationship between the role of ICT-enabled voice platforms and the broader question of the relationship between transparency and accountability. In spite of the widely held view that 'sunshine is the best disinfectant', the empirical literature on the relationship between transparency and accountability is far from clear (Fox 2007; Gaventa and McGee 2013; Peixoto 2013). The assumed causal mechanism is that transparency will inform and stimulate collective action, which in turn will provoke an appropriate institutional response (Brockmyer and Fox 2015; Fox 2014).4 In this model, both analysts and practitioners have only just begun to spell out the process behind that collective action (Fung, Graham and Weil 2007; Joshi 2014; Lieberman, Posner and Tsai 2014). In light of widely held unrealistic expectations about the 'power of sunshine', convincing propositions about the causal mechanisms involved need to specify (1) how and why the availability of an ICT platform would motivate citizen action and (2) why the resulting user feedback would motivate improvements in service provision. After all, decision-makers' lack of information about problems is not the only cause of low-quality service provision.

Third, the relationship between ICT-enabled voice platforms and the transparency/accountability question is complicated by the fact that, in practice, a significant subset of those platforms does not publicly disclose the user feedback. Yet if citizen voice is not made visible to other citizens, where does its leverage come from? Such feedback systems aggregate data – by asking citizens to share their assessments of service provision – but if the resulting information is not made public, then it cannot inform citizen action. In these systems, if users' input is going to influence service provision, voice must activate 'teeth' through a process other than public transparency – such as the use of data dashboards that inform senior managers' discretionary application of administrative discipline.
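To make this upwards pathway concrete, the following minimal sketch – in Python, with entirely hypothetical names (FeedbackRecord, provider_dashboard) and sample data – shows how undisclosed individual feedback might be aggregated into the kind of internal dashboard that informs senior managers' discretionary action. It illustrates the general pattern, not the implementation of any platform discussed here.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    provider_id: str   # front-line unit the feedback concerns (hypothetical field)
    satisfied: bool    # user's assessment of the service received
    resolved: bool     # whether the agency closed the issue

def provider_dashboard(records):
    """Aggregate undisclosed user feedback into per-provider summaries
    that senior managers can act on at their discretion."""
    totals = defaultdict(lambda: {"n": 0, "satisfied": 0, "resolved": 0})
    for r in records:
        t = totals[r.provider_id]
        t["n"] += 1
        t["satisfied"] += r.satisfied
        t["resolved"] += r.resolved
    return {
        pid: {
            "complaints": t["n"],
            "satisfaction_rate": t["satisfied"] / t["n"],
            "resolution_rate": t["resolved"] / t["n"],
        }
        for pid, t in totals.items()
    }

# Hypothetical usage: managers see the rates; the public sees nothing.
records = [
    FeedbackRecord("clinic_A", satisfied=False, resolved=False),
    FeedbackRecord("clinic_A", satisfied=True, resolved=True),
    FeedbackRecord("clinic_B", satisfied=True, resolved=True),
]
print(provider_dashboard(records))
```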

These conceptual propositions suggest that it is relevant to distinguish explicitly between two different accountability pathways that link voice and 'teeth' – shorthand for institutional willingness and capacity to respond (Fox 2014). In downwards accountability relationships, service providers are held accountable by citizen voice and action. The arrow of answerability points downwards, insofar as it is driven by the potential political cost to policymakers of not responding to a publicly visible concern. In contrast, in upwards accountability relationships, front-line and middle-level service providers are held accountable to senior policymakers and programme managers, who use the information provided by users to take administrative action. The arrow of answerability points upwards. In this approach, the incentives for policymakers to act on user information are less clear. Clearly, both mechanisms can operate together, but they are empirically and analytically distinct (see Table 1).

Based on these conceptual propositions, this review of 23 ICT-enabled voice platforms distinguishes between two different types of citizen voice, 'user feedback' and 'civic action'. While these two approaches can overlap in practice, they are analytically distinct. Their common denominator is the use of dedicated ICT platforms to solicit and collect feedback on public service delivery. The differences between them involve three dimensions: (1) whether the feedback provided is disclosed; (2) whether citizens' preferences and views are expressed individually or collectively; and (3) whether these mechanisms tend to promote downwards or upwards accountability. Note that this analytical approach differs from the World Bank's current policy framework, which considers user feedback to be a variant of 'citizen engagement' (World Bank 2014a). The approach proposed here, in contrast, does not treat the adjectives 'citizen' and 'civic' as pure synonyms (though they overlap). We use citizen (as in 'citizen voice') to refer to individual, non-public actions, while civic refers to public, collective actions.5 The two approaches are potentially mutually reinforcing and, in practice, some voice platforms combine them (see Figure 1).

Table 1

With regard to the first dimension, we assess cases in terms of the extent to which the feedback provided by individuals is publicly disclosed, since disclosure enables citizens to potentially act to hold governments accountable. Citizens' capacity to hold governments accountable depends, among other things, on the accessibility of publicly available relevant and actionable information (Fung, Graham and Weil 2007). In this respect, whether the feedback provided by citizens on service delivery is publicised or not is directly related to the extent to which citizens can hold governments accountable for their performance and actions. Thus, a first distinction between user feedback and civic engagement is that, while a growing number of ICT platforms collect input from individuals, only user feedback that is made public counts here as civic engagement (in Figure 1, this is the area of overlap between the two circles, involving both individual feedback and public disclosure).

Figure 1

For instance, in the case of the Punjab Proactive Governance model, the government solicits feedback via mobile phones on the quality of services provided, on a large scale and on an ongoing basis (Bhatti, Zall Kusek and Verheijen 2015). However, the feedback provided is not disclosed to the public, only to senior policymakers, as it is intended to inform internal administrative monitoring processes. This process does not contribute to citizens' ability to act based on the feedback. In contrast, Uruguay's Por Mi Barrio is a mobile and web-based platform that enables Montevideo's citizens to report problems like vandalism and breakdowns of public infrastructure. The problems reported, and the actions taken in response by government (e.g. repaired, or not), are displayed on a map on the public website. Not only is the government able to act on citizen reports, but the publication of the feedback also makes it possible for citizens to hold governments accountable.

The second dimension that we use to categorise platforms assesses the mechanisms by which citizens' views and preferences are expressed – either individually or collectively. Individualised mechanisms are those that do not involve collective action: the feedback provided by a single individual is expected to trigger a response, possibly through aggregation in order to identify problem areas in public service delivery.

Box 1

This is the case, for instance, with web-based citizen reporting initiatives such as Por Mi Barrio, FixMyStreet in Georgia and I Paid a Bribe in India. In these cases, each individual report of very specific service issues needing attention is assumed to be enough to lead to a governmental response. In contrast, collective mechanisms refer to those in which it is the magnitude, nature and intensity of the aggregation of citizen concerns that is expected to trigger governmental action. Examples of platforms for collective voice include online petitions such as Change.org and mobile and web voting in Brazil's state-wide Rio Grande do Sul participatory budgeting (PB) process. In both initiatives, it is the collective mobilisation around a cause or preference that is intended to trigger government responsiveness. The core contribution of the technological platforms that support these mechanisms lies in reducing the transaction costs of collective action that can address policy agenda-setting, in contrast to merely reacting to policy outputs. This collective dimension, we argue, is what gives the character of 'civic-ness' to ICT-enabled voice platforms, insofar as they enable individuals to engage in collective action – or at least to address public concerns. In contrast to feedback systems that receive individual reactions to specific service delivery problems, ICT platforms that enable the public aggregation of citizens' views have more potential to constitute input into the setting of broader policy priorities. This potential civic agenda-setting contribution goes beyond the conventional understanding of feedback, in which the agendas that citizens are supposed to respond to are set from above (see Box 1).
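As a toy illustration of this collective mechanism – where it is the magnitude of aggregated voice, not any single report, that is meant to trigger a response – the sketch below counts unique signers per cause. The PetitionPlatform class and the signature threshold are our own hypothetical constructs, not features of Change.org or the PB process.

```python
from collections import defaultdict

class PetitionPlatform:
    """Toy model of collective voice: what matters is the magnitude of
    aggregated support for a cause, not any single report."""

    def __init__(self, response_threshold: int):
        self.signatures = defaultdict(set)        # cause -> set of signer ids
        self.response_threshold = response_threshold

    def sign(self, cause: str, citizen_id: str) -> None:
        self.signatures[cause].add(citizen_id)    # sets de-duplicate signers

    def causes_needing_response(self) -> list:
        # A cause 'qualifies' once enough distinct citizens back it.
        return [cause for cause, signers in self.signatures.items()
                if len(signers) >= self.response_threshold]

platform = PetitionPlatform(response_threshold=3)
for i in range(3):
    platform.sign("repair rural school roofs", f"citizen_{i}")
print(platform.causes_needing_response())  # ['repair rural school roofs']
```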

Thus, our conceptual distinction can be summarised as follows: citizen feedback initiatives collect feedback from individual clients of services. Where such feedback is not publicly disclosed, the causal pathway to governmental response is via upwards accountability, from front-line and mid-level public servants to senior managers and policymakers. Conversely, civic engagement refers to mechanisms where the feedback is publicly disclosed, which allows for collective action and downwards accountability to also take place. Figure 1 illustrates our conceptual model.
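This two-dimensional classification (disclosure crossed with individual versus collective voice) can be stated as a simple decision rule. The sketch below encodes it in Python; the Platform type and classify function are our own illustrative simplification of the categories in Figure 1, and treating every undisclosed platform as 'user feedback' is an assumption of the sketch.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    USER_FEEDBACK = "user feedback (undisclosed; upwards accountability)"
    CITIZEN_ENGAGEMENT = "citizen engagement (disclosed + individual)"
    CIVIC_ACTION = "civic action (disclosed + collective)"

@dataclass
class Platform:
    name: str
    feedback_disclosed: bool
    voice_is_collective: bool

def classify(p: Platform) -> Category:
    # Undisclosed feedback can only travel the upwards pathway;
    # the sketch treats every such platform as 'user feedback'.
    if not p.feedback_disclosed:
        return Category.USER_FEEDBACK
    # Disclosed and collective meets both criteria for civic action.
    if p.voice_is_collective:
        return Category.CIVIC_ACTION
    # Disclosed but individual: the overlap zone in Figure 1.
    return Category.CITIZEN_ENGAGEMENT

for p in (Platform("Punjab Citizen Feedback Model", False, False),
          Platform("Por Mi Barrio", True, False),
          Platform("Rio Grande do Sul digital PB", True, True)):
    print(f"{p.name}: {classify(p).value}")
```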

On the left side of Figure 1 (light grey), feedback is individual and undisclosed, which we can describe as a typical case of governmental user-feedback platforms. On the right side (dark grey), citizen voice is simultaneously collective and disclosed, meeting the two criteria for our definition of civic engagement. At the intersection, however, we find platforms that both collect individually specific feedback and make those inputs public (sometimes also reporting whether and how the government responds). This overlap reflects the fact that, while individualised feedback mechanisms are not designed to spur online collective action within the platform itself, publicising the feedback may inform and facilitate collective action – offline as well as online. This may be the case, for instance, when the sum of individual feedback on a certain platform, such as FixMyStreet, reveals to the public the patterns of failure in a certain service, or in certain locations. In this case, even though the platform is not specifically designed to support collective action, the disclosure of evidence of patterns of failure in a given service may support well-targeted collective action to address service delivery problems.
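A minimal sketch of how disclosed individual reports could reveal such patterns of failure: the code below groups hypothetical FixMyStreet-style reports by area and service and flags clusters of unresolved issues. The field names, sample data and threshold are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical disclosed reports: (area, service, status).
reports = [
    ("Centro", "street lighting", "open"),
    ("Centro", "street lighting", "open"),
    ("Centro", "street lighting", "fixed"),
    ("Norte", "potholes", "open"),
]

def failure_hotspots(reports, min_open=2):
    """Count unresolved reports per (area, service) pair; pairs at or
    above the threshold suggest targets for collective action."""
    open_counts = Counter(
        (area, service) for area, service, status in reports if status == "open"
    )
    return {pair: n for pair, n in open_counts.items() if n >= min_open}

print(failure_hotspots(reports))  # {('Centro', 'street lighting'): 2}
```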

Figure 2 presents the diagram populated with the cases we analyse in this study. The platforms that generated a high degree of tangible response from the service delivery agencies are indicated in black (7 of 23). High responsiveness to citizen voice is measured here as tangible service delivery agency action, registered in more than half of cases. In eight cases, user uptake was high – though only three of these were also among the seven cases of high responsiveness.
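Taken together with the conservative threshold reported at the end of this section (at least a 20 per cent response rate for 'medium'), the coding rule for labelling cases can be sketched as follows. This function is our reconstruction for illustration, not the authors' actual coding script.

```python
def responsiveness_level(tangible_responses: int, issues_raised: int) -> str:
    """Code a platform's responsiveness from the share of citizen-raised
    issues that received a tangible agency response."""
    rate = tangible_responses / issues_raised
    if rate > 0.5:        # tangible action in more than half of cases
        return "high"
    if rate >= 0.2:       # the conservative 20 per cent floor for 'medium'
        return "medium"
    return "low"

# Hypothetical example: 120 tangible responses to 200 reported issues.
print(responsiveness_level(120, 200))  # high
```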

Figure 2

As shown in Figure 2, approximately a quarter of the cases are found in the user-feedback category, another quarter in the civic action category, and 14 of 23 at the intersection between those two, called citizen engagement here. The cases in the user-feedback category are mostly web- and mobile-based systems for collecting citizen views on the provision of services in a specific sector, such as electricity, water and health. Here the service provider plays either a passive or an active role in the collection of feedback. In the first role, the citizen voluntarily initiates the contact to report an issue with public services via mobile- or web-based systems – sometimes in combination with offline, face-to-face citizen attention windows (as in the case of MajiVoice in Kenya). One large-scale example in this category is Lapor, Indonesia's complaints handling system, which allows citizens to submit reports on issues ranging from teacher absenteeism to damaged roads through a number of channels, including SMS, mobile apps and social media.

The user-feedback category also includes a second mechanism by which data is collected, which we call 'proactive listening' – also called 'proactive feedback' by its practitioners (Bhatti, Zall Kusek and Verheijen 2015; Masud 2015). Here, government service providers proactively reach out to citizens in order to gather feedback from them on the quality of services received. This mechanism is best illustrated by Punjab's Citizen Feedback Model, where a system generates SMS messages and calls to public service users in order to ask them about satisfaction with the services received and potential corruption incidents. The Punjab government has deployed this approach on an unprecedentedly massive scale, with more than 6 million outreach calls so far. Recent large-scale surveys of service users have found that these outreach efforts actually reached and received responses from 15 per cent of citizens called (Bayern 2015; World Bank 2015). EDE Este, an electricity distribution company in the Dominican Republic, also conducts large-scale, proactive surveys of its service users. The initiative combines a traditional complaints handling mechanism with proactive outreach to users. This online/mobile phone platform allows citizens to report problems with electricity services, ranging from malfunctioning connections to bribe requests by maintenance crews. Following the handling of the complaint (e.g. re-connection of electricity), the company proactively re-contacts a random sample of users to gather feedback on the quality of services provided. The feedback received is systematically used to inform sanctions (e.g. administrative procedures) and rewards (e.g. performance-related wage bonuses for company workers). Since its implementation in 2011, the initiative has recorded growing resolution rates for reported issues, with close to 100 per cent of the feedback received indicating good or excellent levels of satisfaction.6 Instances of disrespectful treatment of clients fell drastically from the levels registered at the beginning of the project, and reported cases of corruption fell by 70 per cent.
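A minimal sketch of the re-contact step in such 'proactive listening' schemes: drawing a random sample of users whose complaints were recently closed and sending them a follow-up survey. The function names, sampling rate and message text are hypothetical; this shows the general pattern, not EDE Este's or Punjab's actual system.

```python
import random

def sample_for_follow_up(closed_complaints, rate=0.1, seed=42):
    """Draw a random sample of users whose complaints were recently
    closed, for proactive follow-up on service quality."""
    rng = random.Random(seed)  # seeded only to make this sketch reproducible
    k = max(1, int(len(closed_complaints) * rate))
    return rng.sample(closed_complaints, k)

def send_survey(phone_number):
    # Hypothetical stand-in for an SMS or robocall gateway.
    print(f"SMS to {phone_number}: How satisfied were you with the repair? "
          "Were you asked for any unofficial payment?")

closed = [f"+1809555{n:04d}" for n in range(50)]  # hypothetical numbers
for phone in sample_for_follow_up(closed):
    send_survey(phone)
```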

The majority of platforms make their citizen feedback public (18 of 23). Out of the five that do not disclose the feedback, two are governmental and three involve donor agencies in collaboration with governments. Conversely, all of the CSO-driven initiatives publicise the input given by citizens. This finding makes particular sense if one considers the directionality of accountability relations. User-feedback initiatives (i.e. not disclosed) are more likely to be implemented by governments or donors, where service providers are held accountable to a higher authority (upwards accountability). Conversely, given that CSOs have few means to hold providers directly accountable, they rely essentially on downwards accountability mechanisms, where the driving force of institutional responsiveness – at least hypothetically – is the exposure of the behaviour of service providers vis-à-vis citizens. No pattern seems to emerge when looking at disclosure of feedback and institutional responsiveness, however. In user-feedback initiatives (where feedback is not disclosed and there is no collective action), the four cases are equally split between low and high levels of institutional responsiveness. A similar pattern emerges when examining citizen engagement initiatives: public disclosure of feedback does not seem to lead – per se – to increased responsiveness from providers.

In 14 cases, the provision of input through the dedicated platform is complemented by some type of offline action to prompt governments to respond and/or to monitor government responsiveness. This is the case, for instance, of the Rio Grande do Sul PB process, in which citizens are periodically elected to monitor the implementation of investments prioritised through a voting process (Spada et al. 2015). In MajiVoice, the responsiveness of the water service agency is actively monitored by the members of the Water Services Regulatory Board, which can trigger legal actions against service providers when they fail to meet pre-established quality standards (Belcher and Abreu-Lopes 2016, forthcoming). Yet offline action does not seem to ensure responsiveness by itself, as illustrated by the cases of e-Chautari in Nepal and Barrios Digital in Bolivia. However, among the 14 cases, the evidence is insufficient to assess how the intensity and regularity of these offline actions vary.

In the category of civic action initiatives, where response involves online collective action, we find four different cases, with varying degrees of institutional responsiveness. The Rio Grande do Sul Digital PB process has a high level of institutional responsiveness, while the online petition platform Change.org and the Brazilian initiative Pressure Pan both have medium levels. A possible explanation of the different responsiveness levels is the difference in institutional design. Digital PB in Rio Grande do Sul is a governmental initiative mandated by state legislation. As such, all of the citizen-generated social investment proposals that are approved through the participatory process are officially included in the state's budget, with a number of them effectively carried out by the state government.7 The other two initiatives are platforms that allow any citizen to initiate collective action to petition or exert pressure on the government to act on any public agenda. This open-endedness means that the platforms host both actions that trigger extensive uptake and mobilisation and many that fail to generate follow-up. This potential for a large denominator – in terms of the total number of initiatives launched – lowers the overall percentage of petitions that trigger responsiveness. Indeed, some data seem to suggest the importance of mobilisation capacity: online petitions on Change.org are substantially more likely to be successful when sponsored by an organisation (World Bank 2015), and citizen campaigns through Pressure Pan are three times more likely to succeed when receiving mobilisation support from Pressure Pan's staff. This evidence resonates with the proposition that the effectiveness of digital technologies in social mobilisation depends on offline structures of organisation and influence (Fung, Gilman and Shkabatur 2013). Finally, we find the widely recognised case of U-Report (UR) in Uganda, with a low level of institutional responsiveness, which we shall discuss later.

In terms of the institutional actors that drive the voice initiatives, 12 are led by CSOs, six by governments, and five by donors. Out of the seven initiatives with high levels of responsiveness, four are government-led and three CSO-led. Civil society and governments seem equally capable of creating platforms and processes that engender responsiveness. However, the three high-response CSO initiatives share a common trait: all involve partnerships with government. In other words, in all of the cases of high institutional responsiveness, the government is either leading the process or plays the role of a partner. However, not all of the initiatives involving government–CSO partnerships led to high levels of institutional responsiveness, as illustrated by the cases of I Paid a Bribe and Check My School, both of which had low percentages of issues raised by citizens that led to documented agency responses. Seen together, these findings suggest that while partnership with government is not a sufficient condition for the responsiveness of CSO-led initiatives, it may well be an enabling one. Finally, while the initiatives showing medium and high degrees of institutional responsiveness involve both CSO- and government-driven efforts, we find no donor-driven platforms that led to institutional responsiveness. While we do not claim that our sample is representative, and the results may be skewed due to the small number of donor-driven cases analysed, these patterns suggest future research paths focusing on the role that different drivers may play in institutional responsiveness.

When examining uptake, citizen use of platforms (an output) should not be equated with institutional responsiveness (an outcome). This sample includes significant cases that combined high uptake with low responsiveness. The case of UR, UNICEF's social monitoring system for young Ugandans, provides compelling evidence for this point. Created in 2011, this SMS-based platform runs weekly polls with registered users on a broad range of issues (e.g. child marriage, access to education). To inform public debate, the results of the polls are widely disseminated through the project's website and diverse mass media outlets, in a variety of formats including newspaper articles, radio shows and even a documentary broadcast on major Ugandan TV channels. Members of Parliament (MPs) are UR's main policy audience. In line with a vision of real-time data collection for policymaking that goes beyond sending MPs weekly newsletters with poll results, UNICEF also provides MPs with access to the platform to reach out to their audiences. The number of registered users (U-Reporters) has grown steadily since its launch, recently reaching more than 299,000 (Bayern 2015; World Bank 2015). UNICEF describes UR as a '"killer app" for communication towards achieving equitable outcomes for children and their families' (UNICEF 2012). This enthusiastic view of UR has resonated in development circles, with the free SMS-based platform currently being rolled out in countries such as Rwanda, Burundi, the Democratic Republic of Congo, South Sudan, Nigeria and Mexico.

Uptake is not a problem for UR in terms of numbers, and it leverages the potential of mobile phones as a means to 'listen at scale'. However, 47 per cent of UR participants have some university education and one quarter are government employees, raising questions about whose voices are being projected (see Box 1). Furthermore, until recently very little was known about the extent to which UR's take-up was translated into any type of institutional responsiveness. A recent, detailed evaluation of UR finds no systematic evidence of UR affecting policy, let alone MPs' behaviour in terms of representation, legislation and oversight (Berdou and Abreu-Lopes 2015). UR thus emerges as a significant case that illustrates the need to separate uptake (as an output) from institutional responsiveness (as an outcome).

To conclude the discussion of these empirical findings, one of the most noticeable patterns is that numerous digital engagement initiatives meet dead ends, whatever pathway they follow – at least in the relatively short run. The majority of the 23 cases studied led to low levels of institutional responsiveness, with 11 reporting medium to high levels (defined conservatively as leading to at least 20 per cent response rates). Notably, the multiple dead ends do not seem to be explained by the absence of any one specific factor. None of these variables appears to be a sufficient condition for institutional responsiveness, suggesting that none of these factors can be considered a 'magic bullet'. The findings suggest multiple pathways to institutional responsiveness – involving the convergence of multiple, mutually reinforcing factors. If one factor does stand out, however, it is government involvement, insofar as four of the six cases of government-led voice platforms were associated with high rates of service delivery responsiveness.

3 Conclusion

This study reviewed cases of ICT-enabled voice platforms where evidence of institutional response was available. As suggested in our introduction, in the 'yelp' feedback loop model, proponents tend to assume that user feedback that identifies service delivery problems is sufficient to induce service providers to respond. This review of the evidence from 23 ICT-enabled platforms finds that this implicit market model, in which (individual) demand for good services produces its own supply, is not sufficient to leverage institutional response. That leaves open the question of what determines the 'supply' of institutional responsiveness, and how ICT-enabled voice platforms can make a difference.

The determinants of service provider agency responsiveness to citizen feedback can be understood as involving both willingness and capacity. The first refers to intent and motivations; the second to the leverage provided by institutional tools to translate intent into actual practice. In some cases, institutional design8 and a strong sense of commitment to organisational mission at the top encourage willingness to respond. In these cases, the key role of ICT platforms is to bolster capacity to respond – as with MajiVoice's water provision in Kenya. Some policymakers may come from professions with a strong sense of mission, while others may be more concerned about the potential political risk associated with dissatisfied citizens. Systematic collection of feedback, if it reveals both the depth and breadth of citizen concern, can appeal to either set of motivations – professional commitment to mission, or political risk aversion. These two sets of motivations for responsiveness do not appear to be directly influenced by ICT voice platforms.

In contrast, the determinants of senior managers' capacity to respond to citizen voice are different. Platforms' institutional and technical design features determine the precision with which user problems are identified, which is crucial for identifying which service providers are responsible. The cases studied suggest that it is crucial for user complaints to be routed to entities within the service providing agency that have some incentive and capacity to respond. Specifically, experiences with the most high-impact platforms, such as the Dominican electricity agency and MajiVoice in Kenya, suggest that direct links between governmental feedback reception systems and internal work order systems greatly increase policymakers' capacity to determine whether and how complaints have been resolved, which appears to be a necessary condition for effective institutional response. Similarly, two of the most successful CSO platforms – Por Mi Barrio in Uruguay and I Change My City in India – are connected to existing governmental service provider complaint systems. These institutional factors play crucial roles as intervening variables that shape whether or not voice triggers teeth. The proposition that emerges here is that, regardless of their motivations, policymakers with a commitment to bolstering institutional responsiveness should in principle have an incentive to: (1) institute tracking systems that directly link complaints to institutional responses and (2) publicly disclose both citizen feedback and data regarding institutional response – in order to both inform and validate subsequent citizen action, and to potentially 'name and shame' non-performing units within their agency.
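A minimal sketch of that design feature – linking each complaint to an internal work order so that resolution can be verified, and disclosing both the report and the response. The classes and statuses below are hypothetical, not the data model of MajiVoice or the Dominican utility.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkOrder:
    order_id: str
    status: str = "assigned"   # assigned -> in_progress -> done

@dataclass
class Complaint:
    complaint_id: str
    description: str
    work_order: Optional[WorkOrder] = None

    def is_resolved(self) -> bool:
        # A complaint only counts as resolved once its linked
        # work order has actually been completed.
        return self.work_order is not None and self.work_order.status == "done"

def public_record(c: Complaint) -> dict:
    """Disclose both the citizen's report and the institutional response,
    supporting downwards as well as upwards accountability."""
    return {"complaint": c.description, "responded": c.is_resolved()}

c = Complaint("C-001", "Broken water connection, Sector 4")
c.work_order = WorkOrder("W-778")
c.work_order.status = "done"
print(public_record(c))  # {'complaint': '...', 'responded': True}
```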

To conclude, the empirical evidence available so far about the degree to which voice can trigger teeth indicates that service delivery user feedback has so far been most relevant where it increases the capacity of policymakers and senior managers to respond. It appears that dedicated ICT-enabled voice platforms – with a few exceptions – have yet to influence their willingness to do so. Where senior managers are already committed to learning from feedback and using it to bolster their capacity to get agencies to respond, ICT platforms can make a big difference. In that sense, ICT can make a technical contribution to a policy problem that to some degree has already been addressed.

In summary, ICT platforms can bolster upwards accountability if they link citizen voice to policymaker capacity to see and respond to service delivery problems. This matters when policymakers already care. Where the challenge is how to get policymakers to care in the first place, then the question is how ICT platforms can bolster downwards accountability by enabling the collective action needed to give citizen voice some bite.

Notes

* This article is a substantially abridged version of a study originally prepared as a background paper for the 2016 World Development Report (Peixoto and Fox 2015). The longer version includes the full database of cases studied, including the rationale for coding the cases and data sources for each case. Thanks very much to Brendan Halloran and Rosie McGee for their precise comments on an earlier version.

1. This also included an international platform, Change.org. The data analysis in that case referred to a total of 132 countries (World Bank 2014b).
2. Making All Voices Count is supported by DFID, USAID, Sida and the Omidyar Network.
3. The current enthusiasm – among development stakeholders and the media – over the potential of technology in citizen participation in the developing world is reminiscent of the wave of optimism surrounding such initiatives in Europe over the past decade, despite the significantly less favourable conditions of developing countries. Even in Europe, with generous funding and a more favourable institutional and technological context, most experiences present limited results at best (see, for instance, Prieto-Martín, de Marcos and Martínez 2011; Susha and Grönlund 2014; Diecker and Galan 2014).
4. Note that this widely assumed causal mechanism does not distinguish explicitly between two different kinds of accountability – preventative (reforms that make future transgressions more transparent) and reactive (answerability and the possibility of sanctions).
5. Note that this usage differs somewhat from the dichotomy between 'individual action = user/client/beneficiary' and 'collective action = citizen'. The terms as used here recognise that citizens can express voice as individuals, but suggest that for citizen action to be considered civic it should be public and collective (though possibly anonymous – as in the case of voting). For a comprehensive discussion, see Cornwall (2002), among others.
6. Virgilio Reyes, summary of statistics sent to author, personal communication, 17 November 2014.
7. We do not assess, however, levels of budget execution.
8. In the case of MajiVoice, for instance, degrees of responsiveness can be explained by the modality of contracts between government and service providers (renewable upon performance) as well as the creation of an oversight structure to monitor government response. See Belcher and Abreu-Lopes (2016, forthcoming).

References

Bayern, J. (2015) Investigating the Impact of Open Data Initiatives: The Cases of Kenya, Uganda and the Philippines, Washington DC: World Bank

Belcher, M. and Abreu-Lopes, C. (2016, forthcoming) 'MajiVoice Kenya: Better Complaint Management at Public Utilities', Digital Engagement Evaluation Team document, Washington DC: World Bank

Berdou, E. and Abreu-Lopes, C. (2015) 'The Case of UNICEF's U-Report (Uganda): Final Report to the Evaluation Framework for Digital Citizen Engagement', Digital Engagement Evaluation Team document, World Bank, unpublished

Bhatti, Z.K.; Zall Kusek, J. and Verheijen, T. (2015) Logged On: Smart Government Solutions from South Asia, Washington DC: World Bank

Brockmyer, B. and Fox, J. (2015) Assessing the Evidence: The Effectiveness and Impact of Public Governance-oriented Multi-stakeholder Initiatives, London: Transparency and Accountability Initiative, www.transparency-initiative.org/reports/assessing-the-evidence-the-effectiveness-and-impact-of-public-governance-oriented-multi-stakeholder-initiatives (accessed 6 October 2015)

Carothers, T. and Brechenmacher, S. (2014) Accountability, Transparency, Participation and Inclusion: A New Development Consensus?, Washington DC: Carnegie Endowment for International Peace,
http://carnegieendowment.org/files/new_development_consensus.pdf (accessed 6 October 2015)

Cornwall, A. (2002) Beneficiary, Consumer, Citizen: Perspectives on Participation for Poverty Reduction, SIDA Studies 2, Stockholm: Swedish International Development Agency

Diecker, J. and Galan, M. (2014) '"Creating" a Public Sphere in Cyberspace: The Case of the EU', in E.G. Carayannis, D.F. Campbell and M.P. Efthymiopoulos (eds), Cyber-Development, Cyber-democracy and Cyber-defense, New York NY: Springer

Fox, J. (2014) Social Accountability: What does the Evidence Really Say?, GPSA Working Paper 1, Washington DC: World Bank Global Partnership for Social Accountability Programme

Fox, J. (2007) 'The Uncertain Relationship between Transparency and Accountability', Development in Practice 17.4: 663–71

Fung, A.; Gilman, H.R. and Shkabatur, J. (2013) 'Six Models for the Internet + Politics', International Studies Review 15: 30–47

Fung, A.; Graham, M. and Weil, D. (2007) Full Disclosure: The Perils and Promise of Transparency, Cambridge: Cambridge University Press

Gaventa, J. and McGee, R. (2013) 'The Impact of Transparency and Accountability Initiatives', Development Policy Review 31(S1): 3–28

Gigler, S. and Bailur, S. (eds) (2014) Closing the Feedback Loop: Can Technology Close the Accountability Gap?, Washington DC: World Bank

Grandvoinnet, H.; Aslam, G. and Raha, S. (2015) Opening the Black Box: The Contextual Drivers of Social Accountability Effectiveness, Washington DC: World Bank

Joshi, A. (2014) 'Reading the Local Context: A Causal Chain Approach to Social Accountability', IDS Bulletin 45.5: 23–35

Lieberman, E.; Posner, D. and Tsai, L. (2014) 'Does Information Lead to More Active Citizenship? Evidence from an Education Intervention in Rural Kenya', World Development 60: 69–83

Masud, M.O. (2015) Calling Citizens, Improving the State: Pakistan's Citizen Feedback Monitoring Program, 2008–2014, Princeton NJ: Princeton University, Innovations for Successful Societies, http://successfulsocieties.princeton.edu/publications/calling-public-empower-state-pakistan (accessed 13 October 2015)

Mellon, J.; Peixoto, T. and Sjoberg, T. (2015) 'The Crowd Never Lies? Evaluating the Quality of Crowd-sourced Data in Uganda', Digital Engagement Evaluation Team document, World Bank, unpublished

Peixoto, T. (2013) 'The Uncertain Relationship Between Open Data and Accountability: A Response to Yu and Robinson's The New Ambiguity of "Open Government"', UCLA Law Review Discourse 60: 200–48

Peixoto, T. and Fox, J. (2015) 'When Does ICT-enabled Citizen Voice Lead to Government Responsiveness?', Background Paper, 2016 World Development Report, World Bank Digital Engagement Evaluation Team, unpublished

Prieto-Martín, P.; de Marcos, L. and Martínez, J.J. (2011) 'The e-(R)evolution Will Not Be Funded', European Journal of ePractice 15: 62–89

Ranganathan, M. (2012) 'Reengineering Citizenship: Municipal Reforms and the Politics of "e-Grievance Redressal" in Karnataka's Cities', in R. Desai and R. Sanyal (eds), Urbanizing Citizenship: Contested Spaces in Indian Cities, Thousand Oaks CA and New Delhi: Sage

Spada, P.; Mellon, J.; Peixoto, T. and Sjoberg, F.M. (2015) Effects of the Internet on Participation: Study of a Public Policy Referendum in Brazil, World Bank Policy Research Working Paper 7204, Washington DC: World Bank

Susha, I. and Grönlund, Å. (2014) 'Context Clues for the Stall of the Citizens' Initiative: Lessons for Opening Up E-participation Development Practice', Government Information Quarterly 31.3: 454–65

UNICEF (2012) U-report Application Revolutionizes Social Mobilization, Empowering Ugandan Youth, www.unicef.org/infobycountry/uganda_62001.html (accessed 13 October 2015)

World Bank (2015) 'Digital Analysis', World Bank Digital Evaluation Team document, unpublished

World Bank (2014a) Strategic Framework for Mainstreaming Citizen Engagement in World Bank Group Operations, Washington DC: World Bank, https://openknowledge.worldbank.org/handle/10986/21113 (accessed 13 October 2015)

World Bank (2014b) Survey Report: Citizen Feedback Monitoring Program, Washington DC: World Bank

Copyright Information

CC BY-NC

© 2016 The Authors. IDS Bulletin © Institute of Development Studies

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non Commercial 4.0 International licence, which permits downloading and sharing provided the original authors and source are credited – but the work is not used for commercial purposes. http://creativecommons.org/licenses/by-nc/4.0/legalcode

The IDS Bulletin is published by Institute of Development Studies, Library Road, Brighton BN1 9RE, UK

This article is part of IDS Bulletin Vol. 47 No. 1 January 2016: 'Opening Governance', 23–40; the Introduction is also recommended reading.