Monitoring and Evaluation (MEL) in animal advocacy
According to the Mission Motor:
- high interest in, and perceived need for, MEL
- probably a greater need for modular MEL tools than for building complete systems
- datasets are small, so methods skew more qualitative than quantitative
- they stopped using a cohort model because one size doesn't fit all
Experts suggest that spending 3–20% of a program budget on MEL is adequate for ensuring accountability, assessing effectiveness, and enabling learning and continuous improvement.
Why MEL fails
- MEL is complex and overwhelming
- the wrong tools get used
- staff lack training
- some intervention types, such as lobbying, are hard to measure
- impact is often difficult to quantify
My thoughts
- Do they want to target large, well-funded, but potentially ineffective orgs, since those offer the highest leverage for improving MEL?
- Can small-sample ("tiny stats") tools improve things as much as qualitative methods can?
- Could The Mission Motor pool data across orgs and share it back with all of them, to get around the small-data problem?
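One way to probe the "tiny stats tools" question: exact tests remain valid at the tiny sample sizes typical of small advocacy orgs, where asymptotic tests (like chi-squared) break down. A minimal sketch of a one-sided Fisher's exact test in plain Python; the outreach numbers are made up for illustration:

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided (greater) Fisher's exact test for the 2x2 table
    [[a, b], [c, d]]. Exact, so it stays valid with tiny samples."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    # Sum hypergeometric probabilities for tables at least as extreme.
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p

# Hypothetical example: 8 of 10 conversations led to a pledge with a
# new outreach script vs 3 of 10 with the old one.
p = fisher_exact_greater(8, 2, 3, 7)  # ~0.035, detectable even at n=20
```

Even with n = 20 this can surface a real difference, which suggests simple quantitative tools are not useless at small-org scale, though they complement rather than replace qualitative work.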
AI suggested related notes
These notes appear semantically similar based on Smart Connections embeddings:
- R&D in animal advo (similarity: 60.8%)
- Notes on animal advo and science symposium, 2023 (similarity: 57.0%)
- Social listening for animal orgs - things we've learned (similarity: 56.0%)