Crowd Sourcing Rangeland Vegetation Conditions
This project crowdsourced information from pastoralists on local vegetation and forage conditions to improve rangeland models being used in the region to monitor for drought-related forage anomalies.
Migrant pastoralists on the arid and semi-arid rangelands (ASAL) of East Africa scratch out a livelihood in a harsh environment where most households depend on livestock. While the over 8 million pastoralists in Ethiopia and 3 million in Kenya have the potential to make productive use of the vast ASAL as a source of livestock products like milk, meat and skins for markets in the growing cities of East Africa and beyond, they face critical challenges due to climate change and land pressures. Regular droughts can kill off vast numbers of livestock, threatening not only the economic viability of the ASAL, but the very food security and survival of its inhabitants.
This project pulls together inputs from machine learning and image classification, rangeland ecology, remote sensing, and applied economics and finance to leverage emerging technologies for the benefit of some of the poorest and most vulnerable people on Earth. While crowd-sourcing applications continue to grow rapidly, projects that combine systematic data collection with sophisticated real-time analysis, both to provide feedback into the source system and to serve other uses, remain relatively rare.
Research Activities and Findings
We trained 103 pastoralists in Isiolo County, Kenya, to collect and submit information on rangeland conditions and forage palatability. The survey used icons and audio so that illiterate contributors could participate, and ran on budget smartphones, which we provided to participants along with small solar chargers and training in their use. Over the following six months, those participants successfully submitted over 110,000 surveys.
To learn how to address prospective data issues related to non-random sampling (clustered submissions) and poor-quality submissions, we developed a research design that included two randomized controlled trials. The first trial varied the reward a participant received for a successful submission across space, time, and participants. Reward structures were communicated to participants via a custom-developed map that segmented the project area into reward regions, which we could update remotely. The second trial provided varying intensities of feedback to contributors on their submissions. In the most intensive treatment, poor-quality submissions were manually flagged and their authors were notified of their rate of poor-quality submissions.
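The remotely updatable reward regions can be sketched as a simple lookup from a submission's GPS coordinates to a payout. This is a minimal illustration, not the project's actual implementation; all names, region boundaries, and payout values are hypothetical.

```python
# Sketch of spatially varying submission rewards, assuming axis-aligned
# reward regions. All names and values here are hypothetical.
from dataclasses import dataclass


@dataclass
class RewardRegion:
    """A latitude/longitude bounding box with an associated payout."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    payout: int

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat < self.max_lat
                and self.min_lon <= lon < self.max_lon)


def reward_for(lat, lon, regions, default=0):
    """Return the payout for a submission made at (lat, lon)."""
    for region in regions:
        if region.contains(lat, lon):
            return region.payout
    return default


# The region list can be replaced wholesale when the server pushes an
# update, e.g. raising payouts in under-sampled areas to reduce clustering.
regions = [
    RewardRegion(0.0, 1.0, 37.0, 38.0, payout=10),  # well-sampled: low reward
    RewardRegion(1.0, 2.0, 37.0, 38.0, payout=30),  # under-sampled: high reward
]

print(reward_for(0.5, 37.5, regions))  # 10
print(reward_for(1.5, 37.5, regions))  # 30
```

Because the map is just data, the treatment arm a participant faces can be changed remotely without shipping a new app, which is what makes the across-space-and-time reward variation practical on budget phones.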
Analysis of the distribution of submissions showed that varying rewards successfully reduced spatial clustering of submissions, and that the feedback treatments improved submission quality along some dimensions. We are currently using the submissions and accompanying images to characterize land cover classes produced by an unsupervised classification of remotely sensed data in the region.
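One simple way to characterize unsupervised land-cover clusters with ground observations is a majority vote: each cluster is labeled with the most common condition reported by pastoralists whose submissions fall inside it. The sketch below is illustrative only; the cluster assignments, labels, and function name are invented for the example and are not the project's actual pipeline.

```python
# Sketch: label unsupervised land-cover clusters by majority vote over
# crowdsourced ground observations. All data here are illustrative.
from collections import Counter

# Cluster id of the pixel containing each ground submission (hypothetical).
submission_clusters = [0, 0, 1, 1, 1, 2, 0, 2, 2]
# Pastoralist-reported condition for the same submissions (hypothetical).
submission_labels = ["green", "green", "dry", "dry", "mixed",
                     "dry", "green", "dry", "dry"]


def characterize(clusters, labels):
    """Map each cluster id to the majority ground label reported in it."""
    votes = {}
    for cluster, label in zip(clusters, labels):
        votes.setdefault(cluster, Counter())[label] += 1
    return {c: counter.most_common(1)[0][0] for c, counter in votes.items()}


print(characterize(submission_clusters, submission_labels))
# {0: 'green', 1: 'dry', 2: 'dry'}
```

A majority vote is the simplest possible link between the remote-sensing clusters and the ground data; in practice one would also weight by submission quality and check how evenly the votes split within each cluster.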
Publications
Jensen, Nathaniel, Russell Toth, Yexiang Xue, Richard Bernstein, Eddy Chebelyon, Andrew Mude, and Carla Gomes. Don’t Follow the Crowd: Incentives for Directed Spatial Sampling.
Jensen, Nathaniel, Elizabeth Lyons, Eddy Chebelyon, Ronan Le Bras and Carla Gomes. Monitoring A, While Hoping for A & B: Field Experimental Evidence on Multidimensional Task Monitoring.
Naibei, Oscar, Nathan Jensen, Rupsha Banerjee, and Andrew Mude. 2017. Crowdsourcing for rangeland conditions—Process innovation and beyond. ILRI Research Brief 84. Nairobi, Kenya: ILRI.
Blogs and media mentions
Ezra, Cornell’s Quarterly Magazine: High tech helps rural herders. (Vol. VII, No. 4, Summer 2015)
Economics that Really Matters blog: From pastoralists to Mechanical Turks: Using the crowd to validate crowdsourced data. (June 1, 2015)
Cornell Chronicle: Space-age technology points African herders in right direction. (February 18, 2015)
2016–2018