If governments heed the growing call for a "data revolution" and open their administrative data, citizens will likely see less corruption and better public service delivery. Researchers, too, stand to reap big benefits as they gain access to vastly more data with which to test their hypotheses. But even as governments comply and the data become available, cracks in big data have begun to show, along with the scientific and organizational fixes needed if the data revolution is truly to increase transparency.

You don’t get much bigger or more open data on development than that produced by MGNREGA, yet it took a technically and politically complex process to make it transparent.

A number of recent articles have exposed the gaps between big datasets and their usability, reliability and utility. Pippa Norris has written on the need to "triangulate" big data with more focused measures to track development; Paul Jasper on the security risks involved and on the still-huge population whose records remain undigitized; Daniella Ballou-Aares and Tony Pipa on the financial investment required; Rohini Pande and Florian Blum on the political economy considerations involved; and Michelle Chen on the ways big data can reinforce social inequalities, to name just a few.