Blog Post by Jacques Mailloux, CIO, Elections Canada (formerly of CIDA)
Source: http://jackm2020.wordpress.com/2014/06/09/technology-salon-canada-was-in-ottawa-on-june-5th-2014/
Technology Salon Canada was in Ottawa on June 5th, 2014
Last week I had the pleasure and honor of being one of the lead “discussants” at the very first Technology Salon Canada (TSC) event held in Ottawa. The subject of the Salon was “How can Technology Improve Monitoring & Evaluation in International Development?”. As the former Chief Information Officer of the Canadian International Development Agency, I relished the opportunity to explore ICT4D (Information and Communications Technology for Development) issues and challenges with my former colleagues. International Development professionals are a passionate bunch who live, breathe and feel their mandate. I was fortunate to have been referred to TSC by a former colleague, whom I discovered last week has failed at retirement, and continues to bless the world with his experience and knowledge doing contract work.
Any event that promises to bring passionate people together to exchange, learn and explore challenges interests me. Making new connections is a bonus. This event did not disappoint. The protocol for a Technology Salon is described on the sponsor website (http://developmentwisdom.weebly.com/technology-salon.html) as follows:
“The Technology Salon™ is an intimate, informal, and in-person, discussion between information and communication technology experts and international development professionals, with a focus on both:
- technology’s impact on donor-sponsored technical assistance delivery, and
- private enterprise driven economic development, facilitated by technology.
- Conversation, not presentation
- Intimacy of participants
- Confidentiality of opinions”
The subject of this Technology Salon was: “How Can ICT (Information and Communication Technologies) Improve Monitoring and Evaluation (M&E)?” My observations on this subject, as I fully disclosed at the outset, were from my role as CIO at the former Canadian International Development Agency (CIDA).
I feared that, in the presence of international development professionals with far more experience on the ground, my limited perspective might place me in the position of the proverbial blind man holding the tail of the elephant and describing it as a rope-like creature.
The conveners deftly managed the discussion within the established guard rails, and gave everyone a chance to express their perspectives.
With my experience being more ICT than M&E, I felt more comfortable as a discussant initiating a conversation on the state of the union of ICT. I believe the ubiquity of mobile devices and the increasing ease and pervasiveness of mobile application development have put the capacity for near-real-time monitoring and evaluation closer to the ground than ever before.
It could be argued that in the context of this discussion, such a claim is an understatement, as easily as it could be argued that it is an overstatement.
Encouraged to bring examples to the discussion, I referenced how impressed I was during a visit to East Africa, where I met aid workers from an NGO focused on equipping front-line health technicians with mobile diagnostic tools, specifically remote pre-natal ultrasound scanning. Their mission was spurred by the realization that brick-and-mortar hospitals outside the cities bordered on useless: there weren’t enough qualified doctors and nurses willing to locate there to provide diagnoses and care. Appropriately equipped front-line health practitioners and technicians who were there, or who could travel there, could administer mobile pre-natal ultrasound scans and, using newly deployed wireless and cellular infrastructure, transmit the results to wherever they needed to be assessed.
The “art of the possible” for ICT4D will remain alive and well as new innovations emerge.
There were many examples from natural disaster response, where mobile ICT supported crowdsourced data collection through open apps or public social media, providing intelligence that helped responders deploy humanitarian assistance effectively – the Indian Ocean tsunami, the Haiti earthquake, the Pakistan floods – taking advantage of the fact that even someone shoeless, and perhaps even pocketless, would often still hold a mobile phone.
These were not, however, good examples of effective use of ICT for monitoring and evaluation in the sense of measuring program or project results, or even of transparency and accountability. So, in good Oxford-style debate format, I introduced the following resolution and invited participants to pick a side and discuss:
“Be it resolved that ICT has thus far FAILED in effectively supporting monitoring and evaluation of aid delivery programs.”
It was especially interesting that much of the discussion that ensued had most participants presenting facts in support of both sides of the resolution. The fact is, there are successes and failures, opportunities found and opportunities lost. Much of the discussion centered on the difficulty individual organizations have in practically defining, in a structured way, the data and information needed to monitor and evaluate their aid programs – in a way that allows results and indicators to be attributed accurately to programs and projects. The problem often lay in setting unmeasurable target outcomes, in establishing real baseline values, or in defining measurement frameworks.
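To make that structural challenge concrete, here is a minimal sketch in Python of what a measurement framework implies as a data model: every indicator needs an explicit attribution to a project, a real baseline, and a measurable target before evaluation is even possible. The names and numbers are entirely hypothetical, not drawn from any agency’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    """One measurable indicator, explicitly attributed to a project."""
    project_id: str            # attribution: which project claims this result
    name: str
    baseline: Optional[float]  # starting value; None means no real baseline
    target: Optional[float]    # desired value; None means unmeasurable target
    actual: Optional[float] = None  # latest observed value

    def is_measurable(self) -> bool:
        # An indicator without a baseline and a target cannot be evaluated.
        return self.baseline is not None and self.target is not None

    def progress(self) -> Optional[float]:
        # Fraction of the baseline-to-target distance achieved so far.
        if not self.is_measurable() or self.actual is None:
            return None
        span = self.target - self.baseline
        return (self.actual - self.baseline) / span if span else None

# Hypothetical example: a maternal-health indicator tied to one project.
scans = Indicator(project_id="PRJ-001",
                  name="Pre-natal ultrasound scans per month",
                  baseline=40, target=200, actual=120)
print(scans.progress())  # 0.5 -- halfway from baseline to target
```

Trivial as it looks, much of the tail chasing described below happens because one of those three fields is missing or undefined in practice.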
Meeting this challenge in many organizations often turned into a tail-chasing exercise. Some described it as the inability of IT to deliver on the requirement. Others described it as the inability of program and policy folks to articulate a structured framework that could lend itself to automation. Challenges were raised around effective data collection, data management, data quality, visualization and interpretation. Reassuringly, a few talked of their “super IT people” who could listen, learn, understand and work with them to conceptualize a real solution.
Points were raised about the poor quality of real, quantifiable evidence. Of more concern were cases where a lack of political courage to present actual results, when they were indeed available, led to time and effort spent developing anecdotal headlines of little real value.
The issues individual organizations face in defining and managing effective data are a microcosm of the issue facing the international aid community. How can monitoring and evaluation be effectively enabled across and between donors working in the same sector and in the same country, if attributing results and indicators to programs and projects is a challenge even within a single donor? Do political and inter-organizational competitive pressures impede even the desire to enable a framework that allows for cross-country comparisons?
Why not unite in solving this challenge? As a strong supporter of the International Aid Transparency Initiative (IATI), and a believer in the value of expanding international open data and reporting standards into other domains, I took the opportunity to promote once again the IATI open data example as the path towards not just transparency and accountability, but also towards meaningful aggregate global and country reporting. International development professionals involved in results-based management (RBM) and M&E frameworks ought to collaborate at every opportunity with IATI technical advisors; join the codefests and hackathons and support the technicians and programmers in conceptualizing a complete support framework. I know there are good people working on this.
My opinion, derived from my own personal experience, is that efforts need to be focused on the “I”, rather than the “C” or the “T”. Technology and Communications are NOT the hurdle today. The “C” and the “T” are currently enabling wondrous innovation, with big data streaming across the internet and landing in massive data stores. The pipes are bigger – the capacity to store and manipulate structured, machine-readable data has never been better. The current bottleneck to enabling monitoring and evaluation is the inadequacy of the information architecture, and the lack of a generally accepted international standard defining purposeful data. More detailed and better quality information from partners in development, of the kind that standards such as the International Aid Transparency Initiative have the potential to deliver, is necessary.
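As a rough illustration of what “purposeful”, machine-readable aid data looks like, the sketch below parses a condensed fragment of IATI-style activity XML and pulls out the baseline, target and actual values for a result indicator. The fragment is heavily simplified for illustration; real IATI files carry far more structure, and the element names and attributes shown should be checked against the published IATI activity standard.

```python
import xml.etree.ElementTree as ET

# A condensed, illustrative fragment loosely following the IATI activity
# standard; real IATI files carry much more structure than shown here.
IATI_XML = """
<iati-activities version="2.03">
  <iati-activity>
    <iati-identifier>XX-EXAMPLE-001</iati-identifier>
    <title><narrative>Mobile pre-natal ultrasound program</narrative></title>
    <result type="1">
      <indicator measure="1">
        <title><narrative>Scans performed per month</narrative></title>
        <baseline year="2013" value="40"/>
        <period>
          <period-start iso-date="2014-01-01"/>
          <period-end iso-date="2014-12-31"/>
          <target value="200"/>
          <actual value="120"/>
        </period>
      </indicator>
    </result>
  </iati-activity>
</iati-activities>
"""

root = ET.fromstring(IATI_XML)
for activity in root.findall("iati-activity"):
    ident = activity.findtext("iati-identifier")
    for indicator in activity.iter("indicator"):
        name = indicator.findtext("title/narrative")
        baseline = indicator.find("baseline").get("value")
        for period in indicator.findall("period"):
            target = period.find("target").get("value")
            actual = period.find("actual").get("value")
            print(f"{ident} | {name}: baseline {baseline}, "
                  f"target {target}, actual {actual}")
```

The point of the exercise is that once donors publish results in a common structured format like this, aggregating across programs, donors and countries becomes a parsing problem rather than a negotiation.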
As we conducted our discussion, it was interesting that Sally Paxton, U.S. Representative for Publish What You Fund, was almost simultaneously blogging the following thoughts:
“Because both globally and in the U.S., the state of aid information is still outdated, piecemeal and can’t be compared across donors.
- We don’t know, with any detail, what we are spending and with what results;
- We have little information about what other donors are spending – and with what results;
- And recipient countries – where we want and expect that they will some day become self sufficient – often have little idea what donors are spending, let alone know what they plan to spend in the future.
When lack of measurability puts quantitative results measurement, evidence-based decision making and predictability of effectiveness at risk, what real basis do we have for justifying the billions of dollars being spent?”
The Technology Salon incited some deep thinking, enabled some excellent sharing and, I don’t doubt, made a few connections that will be explored further. That in itself is a worthy outcome for the participants. Keep an eye out for Technology Salon Canada, hopefully bringing more interesting engagements to a venue near you.