Measuring Outages and Voltage Quality in Rural Kenya: Engineering Lessons Learned

Authors
Margaret Odero (Data Analyst, nLine) and Joshua Adkins (CTO, nLine)
Date
Feb 28, 2022
A screenshot of cumulative sensor installation locations in Kenya from June ’21 — January ’22. Note: sensor locations have been enlarged and obscured to protect the privacy of study participants.

Introduction

At nLine we have been deploying sensors to measure distribution outages and voltage quality at households and firms in Accra, the capital of Ghana. However, it is important to understand reliability not just in urban settings, but also in rural settings.
In this blog we will talk about the deployment we have been running in rural Kenya to measure key outcomes from the ongoing expansion of the rural distribution network. We first introduce the academic study for which we are providing our data, and then discuss the steps we took to confirm that our sensors returned the same high quality of data in the rural Kenyan setting as they do in Accra.
Additionally, this deployment introduces a new measurement methodology in which we increase the geographic coverage of our sample by picking up sensors and redeploying them in new locations. We evaluate the success of this methodology, which is uniquely suited to our plug-in sensors and has been expertly executed by our implementation partner REMIT Kenya.
This blog post highlights what we have learned so far during the first eight months of data collection and analysis (sensors are funded to continue rotating among villages and collecting data until at least June 2022). Overall, we see that sensors maintained functionality through their rotations and that the quality of cellular connectivity is sufficiently good in Kenya to receive real-time data from our sensor fleet. We also see that contrary to our expectation, these villages in Kenya did not experience longer outages than Accra during our deployment period.
First, we wish to thank our generous funders, The Applied Research Programme on Energy and Economic Growth (EEG), without whose support we would not have had the opportunity to test our measurements in a rural context and gather a novel and important reliability dataset from rural Kenya.

Study Supported by nLine Data

Multilateral organizations often provide aid or financing for development projects in low- and middle-income countries. To improve accountability, these organizations often impose stringent conditions that must be met during different stages of project implementation. One such project is the Last Mile Connectivity Project in Kenya, in which the government of Kenya aims to connect all citizens to electricity. The construction of the electricity infrastructure for this project was funded partly by the World Bank and partly by the African Development Bank. The two banks attached different conditions to the use of their funding, with the World Bank imposing more stringent contracting requirements.
Donor conditionality — the conditions that multilateral organizations such as the World Bank attach when funding international infrastructure projects — has been contentiously debated by governments, policy makers, and academics ever since its first usage in the 1980s. However, evaluating the impacts of these conditions is difficult without rigorous, independent measures of construction quality on the ground.
Beginning in 2016, a team of researchers from the University of Pennsylvania and the University of California Berkeley, using funding from the Economic Development and Institutions initiative, set out to investigate whether the conditions imposed by the African Development Bank led to different power quality and reliability outcomes than the stricter World Bank conditions. The study focuses on the rural parts of Kakamega, Kericho, Kisumu, Nandi, and Vihiga counties as these have ongoing work funded by both the World Bank and the African Development Bank.
In 2021 this research team partnered with nLine to collect metrics for power quality (voltage and frequency) and reliability (power outage durations and frequencies) in the study counties. We were excited for the opportunity to use our technology suite as a key source of ground-truth data in an evaluation of one of Kenya’s largest infrastructure projects.

Deployment Overview

We have used GridWatch for measurement of power quality and reliability in different locations, the first being in Accra, the capital city of Ghana, where there was a need to independently evaluate outcomes of the Millennium Challenge Corporation-funded Ghana Power Compact. Through funding from The Applied Research Programme on Energy and Economic Growth, we had the opportunity to scale-up and expand the deployment of GridWatch outside of Ghana and provide data for the study in rural Kenya.
Over the course of eight months (June 2021 through January 2022), 100 sensors were deployed in 25 villages at a time (four sensors per village) for two months per rotation, covering 100 villages in total. Our unique method of rotating villages allows researchers to increase the breadth of their sample without having to deploy and manage more sensor units. We evaluate the success of this methodology by looking at the sensor uptime maintained throughout the deployment period.
This deployment also represented a good opportunity to adapt and test our sensor hardware in a new context. Anecdotally, we had heard that outages in rural Kenya can last much longer than the outages we observe in Accra, and we changed our sensor firmware to ensure that outage restoration time is properly recorded even in week- or month-long outages.
We also wanted to test the success of cellular coverage in these rural areas. While mobile phone usage in rural Kenya is high, and telecommunications companies like Safaricom provide coverage in the areas where we were deploying, our past difficulties in using global MVNOs (mobile virtual network operators, which are telecom aggregators) led us to have a healthy skepticism of cellular connectivity in new environments. To ensure the return of data even in the case of poor connectivity, we implemented an extended data queuing feature such that data would still eventually be sent back to our servers even if a sensor did not connect to a cellular network until after its deployment period.
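The queuing behavior described above can be sketched as follows. This is a minimal illustration of the idea under our own assumptions, not nLine's actual firmware; all names and the oldest-first flush policy are hypothetical.

```python
from collections import deque

class SampleQueue:
    """Sketch of extended data queuing: samples are buffered locally and
    flushed whenever a cellular connection succeeds, so data still arrives
    even if the sensor first connects after its deployment period."""

    def __init__(self):
        self.pending = deque()

    def record(self, sample):
        # Every sample is queued first, never sent directly.
        self.pending.append(sample)

    def flush(self, transmit):
        """Try to send queued samples oldest-first; stop at the first
        failure so ordering is preserved and nothing is dropped."""
        sent = 0
        while self.pending:
            if not transmit(self.pending[0]):
                break  # no connectivity yet; keep the sample queued
            self.pending.popleft()
            sent += 1
        return sent
```

In use, `flush` would be called on every successful network attach; failed attempts simply leave the queue intact for the next attempt.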

Sensor Functionality through Redeployment Periods

To evaluate the success of the new rotation-based deployment strategy, we examine sensor uptime over the deployment period. We attempted to determine the main causes of observed failures, hypothesizing that many instances where sensors stopped reporting were due to participants unplugging the sensor, an issue we successfully corrected by contacting participants and asking them to plug the sensor back in.
Figure 1: Sensor uptime graph for seven months of deployment in Kenya. Failures are either due to unplugs (sensed by the accelerometer), sensors temporarily going offline due to un-sensed unplugs (such as those that occur when the wall switch is flipped or a customer’s pre-paid credit runs out), or unknown failures. The dips and rises in the number of deployed sensors towards the end of August and at the end of October are due to planned sensor pick-ups and redeployments.
Overall, except during periods of redeployment, we see 80–90% of the sensors remain online every day. This is similar to the sensor uptime we observe in our Accra deployment. Another concern with sensor rotations was that some sensors would be broken or lost at every rotation, and that this attrition would eventually shrink the size of the deployment. While we have not been able to retrieve a few sensors over the eight-month deployment period, overall attrition has been less than 5% and has not impacted the study.
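The failure categories in Figure 1 can be illustrated with a small classifier. The 10-minute offline threshold, function name, and category labels here are all assumptions for illustration, not our actual analysis code.

```python
from datetime import datetime, timedelta

# Assumed threshold: sensors report every 2 minutes, so several missed
# reports in a row suggest the sensor is offline.
OFFLINE_THRESHOLD = timedelta(minutes=10)

def classify_sensor(now, last_report, accel_unplug_seen):
    """Assign a sensor one of the daily statuses shown in Figure 1,
    based on its last report time and whether the accelerometer
    detected an unplug event."""
    if now - last_report <= OFFLINE_THRESHOLD:
        return "online"
    if accel_unplug_seen:
        return "unplugged (sensed)"  # accelerometer detected removal
    return "offline (unknown or un-sensed unplug)"
```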

Cellular Connectivity in Rural Areas

Most remote sensor deployments need a communication backhaul to transmit the data they observe. Our sensors send data to our servers over the 2G GSM cellular network. 2G coverage maps provided by our MVNO partner Aeris showed good coverage in our deployment region (see Figure 2), but even so we were anxious that the connectivity story would be different on the ground.
Figure 2: [Left]: Aeris map showing 2G cellular coverage in the five study counties in Kenya (Kakamega, Kericho, Kisumu, Nandi, and Vihiga). The map shows wide coverage, yet it was important to test the level of cellular coverage of this region to ascertain the likelihood of data collection success. [Right]: map showing the locations where the sensors were deployed in Kenya.
The sensor collects data on the power state of the grid, the voltage level, and other metadata such as timestamps every two minutes, and relays this to nLine servers where it is stored in a database for analysis. We compared the expected number of samples collected by the sensor to the actual number of samples we received, excluding periods where the sensor was undergoing long-term failure (such as those shown in Figure 1), and plotted it as a cumulative distribution function (CDF) in Figure 3.
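As a sketch, the per-sensor packet reception rate can be computed like this, assuming long-term failure periods have already been excluded from the window; the function and its parameters are illustrative.

```python
def packet_reception_rate(received, window_minutes, interval_minutes=2):
    """Fraction of expected samples actually received over a window.
    A sensor collects one sample per interval (2 minutes by default),
    so the expected count is the window length divided by the interval."""
    expected = window_minutes // interval_minutes
    # Cap at 1.0 in case duplicates or retransmissions inflate the count.
    return min(received / expected, 1.0)
```

Computing this ratio per sensor and sorting the results gives the CDF shown in Figure 3.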
Figure 3: A CDF of the packet reception rate [top] and reporting delay [bottom]. [Top]: A sensor is expected to collect a sample every 2 minutes. We counted the number of samples the sensors collected and compared it to the expected number, excluding long-term failure periods, and this fraction is referred to as the packet reception rate. [Bottom]: The reporting delay is the difference between the time the sample was collected and the time it was actually logged in the database. We see slightly lower performance in Kenya than in Ghana both in PRR and in reporting delay, however we do not anticipate this significantly impacts the quality of our sample.
The packet reception rate of most sensors in Kenya is quite high, with over 90% of the sensors having a PRR of greater than 90%. The PRR of the Kenya deployment is slightly worse than that of the Accra deployment, but not significantly so.
While packet reception rate is a good indication of overall sensor performance, it is not necessarily a good indication of strong cellular connectivity because the sensors were redesigned for this deployment to queue and later transmit data if the cellular network is down. Therefore, to determine the quality of cellular connectivity, we recorded both the time the sensor collected samples and the time these samples were actually logged in the nLine database. The difference between the collection time and reception time is shown as a CDF in Figure 3 (above).
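A minimal sketch of this delay computation, with hypothetical function names: each sample carries both its collection time and the time it was logged in the database, and the sorted per-sample differences form the CDF in Figure 3.

```python
from datetime import datetime, timedelta

def reporting_delays(samples):
    """Per-sample delay between collection and database logging, in
    seconds. Each sample is a (collected_at, logged_at) datetime pair."""
    return sorted((logged - collected).total_seconds()
                  for collected, logged in samples)

def fraction_within(delays, threshold_s):
    """Fraction of packets received within threshold_s seconds of
    collection, i.e. the CDF evaluated at that threshold."""
    return sum(d <= threshold_s for d in delays) / len(delays)
```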
We see that 90% of packets were received within 1 minute of collection, and 95% within 15 minutes. We estimate that about 85% were received on the first transmission attempt. This confirms that the cellular network in these villages is as extensive and accessible as our global MVNO partner claimed. However, reporting delays are longer and the fraction of measurements received on the first transmission attempt is lower in Kenya than in Accra, so the sensors did benefit from the extended data queuing functionality implemented for this deployment. The delays suggest that distances to cellular towers are greater, or that available network capacity is lower, in rural Kenya.

Power Outage Duration Measurements

The sensor is designed such that when power goes out, the sensor uses an internal battery to continue reporting until the power is restored, allowing us to calculate outage duration. We increased this battery life to ensure that duration calculation would still be accurate even if very long outages occur.
We expected outage durations in rural Kenya to be longer than those in Accra based on observations from other researchers and anecdotal evidence from our field teams. This is quite intuitive, as rural outages may occur farther from service centers or may impact fewer customers and therefore be a lower priority for repair.
Using our data, we compare the outage length distributions in rural Kenya and Accra for the first time. To ensure that very long outages are not incorrectly recorded, we modified our analysis code to detect outages even when they lack precise restoration timestamps (sensors will eventually report their power restored even if the internal battery dies). Figure 4 shows a CDF plot and histogram of the outage durations in the two country deployments over the same period of time (June 2021 to December 2021).
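The outage-duration calculation can be sketched as follows; this is an illustrative reconstruction, not our actual analysis code. It closes an outage at the first powered report after the power-loss sample, which also handles the case where the battery died mid-outage: the next report the sensor sends after restoration still bounds the outage.

```python
def outage_durations(samples):
    """Derive outage durations from a time-ordered list of
    (timestamp_seconds, powered) samples for one sensor. An outage
    starts when `powered` flips to False and ends at the first
    subsequent powered report, even if the sensor's battery died and
    no samples arrived in between."""
    durations = []
    outage_start = None
    for ts, powered in samples:
        if not powered and outage_start is None:
            outage_start = ts          # power just went out
        elif powered and outage_start is not None:
            durations.append(ts - outage_start)  # power restored
            outage_start = None
    return durations
```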
Figure 4: [Left]: A CDF plot of outage durations measured in rural Kenya and in Accra, Ghana during the same time period. [Right]: Histograms of outage durations measured in rural Kenya and in Accra, Ghana during the same time period (up to the 95th percentile, linear scale).
From our data, we see that outage durations in Kenya are shorter on average than those we observe in Accra during the same period. This is contrary to the expectation that the outage durations in a rural area would be longer. While the work done to ensure the sensors continue to function during very long outages may not be necessary in these locations in Kenya, it will certainly ensure that GridWatch sensors function well in future deployment contexts.

Concluding Thoughts

With every new deployment we grow our confidence in the ability of GridWatch sensors to provide critical power quality and reliability data around the world. We are thankful for the opportunity to test our methodology in a new context, and excited to see the full results of the study on donor conditionality during the Last Mile Connectivity Project.
If you are looking to collect power quality and reliability data in a context that would be difficult to reach with existing sensing technologies, please reach out to us at info@nline.io.