In development discourse, “digital preparedness” is often assumed once access and basic skills are present. But are devices and task completion enough to claim readiness?

Understanding where individuals or communities stand in the digital divide requires more than observation or anecdotal field insights. While immersive fieldwork provides rich context, it is resource-intensive and difficult to scale. This is where structured assessment frameworks become critical.

Task-Based Assessments: A Necessary Baseline

Large-scale frameworks such as the International Computer and Information Literacy Study (ICILS) offer standardized tools to measure ICT competencies across countries. These approaches generate comparable data on access and technical ability.

However, they often rely on synthetic, closed-environment tasks. Completing predefined exercises does not necessarily translate into meaningful, real-world agency.

Situational Readiness: The Missing Layer

Frameworks like the Technology Readiness Index (TRI) go further by examining psychological and contextual factors—optimism, innovativeness, discomfort, and insecurity. These dimensions reveal whether individuals are likely to apply technology confidently and independently.
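To make the idea of combining these dimensions concrete, here is a minimal sketch of how a composite readiness score could be computed. The function name, the 1–5 Likert scale, and the simple averaging with reverse-coded inhibitors are illustrative assumptions for this post, not Parasuraman's published TRI scoring procedure.

```python
# Illustrative only -- NOT the official TRI instrument or scoring.
# Contributor dimensions (optimism, innovativeness) raise readiness;
# inhibitor dimensions (discomfort, insecurity) lower it.
# All inputs are assumed to be 1-5 Likert-scale averages.

def readiness_score(optimism, innovativeness, discomfort, insecurity):
    """Average the two contributors with the reverse-coded inhibitors."""
    # Reverse-code inhibitors on a 1-5 scale: 5 -> 1, 1 -> 5.
    reversed_discomfort = 6 - discomfort
    reversed_insecurity = 6 - insecurity
    return (optimism + innovativeness
            + reversed_discomfort + reversed_insecurity) / 4

# A respondent who is optimistic (4.5) and innovative (4.0) but
# somewhat uncomfortable (3.0) and insecure (2.0) with technology:
score = readiness_score(4.5, 4.0, 3.0, 2.0)  # -> 3.875
```

The point of the sketch is the structure, not the numbers: two people with identical skills test results can land far apart once the inhibitor dimensions are counted.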

The distinction is crucial: knowing how to use a tool is not the same as being willing and able to use it effectively in real situations.

Implications for Monitoring & Evaluation

For M&E professionals, combining task-based and situational assessments is essential. Without measuring both behavioural competencies and psychological readiness, investments risk focusing on surface-level metrics rather than genuine participation.

👉 Read the full article to explore how better measurement can lead to smarter digital development strategies.
