As governments around the world move to integrate generative artificial intelligence (AI) into the machinery of public decision-making, a new study suggests that confidence in the technology may be lagging behind its promise.

A research team led by Professor Kim Do-hyung of Kookmin University proposed a framework to measure what it describes as a critical disconnect: the gap between the actual maturity of AI systems and the expectations placed on them by stakeholders. The study, published in the peer-reviewed journal Technovation, argues that this divide could help determine whether the technology advances or falters in high-stakes public research and development programs.

Seeking a more systematic approach, Kim and his co-authors devised what they call the "Maturity-Expectation Gap," or MEG, framework. The model draws on survey responses from experienced evaluators and pairs them with a machine learning analysis of academic research, aiming to measure both how ready the technology is and how much is expected of it.

Their findings point to a notable divergence. Different groups — i