Forbes India
The global business landscape is being reshaped by the rapid diffusion of artificial intelligence (AI). As firms across industries embed AI into decision-making, operations, and strategy, business schools face mounting pressure to prepare graduates for an AI-enabled workplace. Nowhere is this pressure more pronounced than in India, where employer expectations are rising sharply and institutions are racing to signal readiness for the AI era.

The AI Adoption Index (2024) suggests that most Indian firms have already reached a “mid-enthusiast” stage of AI maturity, creating strong demand for managers who can work confidently alongside algorithms. With India’s AI market projected to reach $17 billion by 2027, the economic case for AI-literate managers is unambiguous. Yet, while business schools increasingly acknowledge this reality, their institutional readiness to implement AI in a meaningful and operational manner remains uneven.

A recent MBAUniverse.com survey of 235 faculty members from leading Indian business schools, including IIMs, ISB, XLRI, MDI, IITs, and SPJIMR, captures this ambivalence. While 51 percent of faculty members believe AI will positively influence student learning, only 7 percent consider themselves expert users. Most feel ill-equipped to guide students through the academic and ethical complexities of AI. This gap underscores the need for structured institutional strategies encompassing responsible-use frameworks, faculty training, curriculum integration, and sustained interdisciplinary collaboration. Instead, many schools appear to be responding reactively, driven more by fear of missing out than by genuine capability building.

A race fuelled by reputation, not resources

Globally, elite institutions such as Harvard Business School, Saïd Business School, Columbia Business School, and INSEAD have launched AI-focused programmes, centres, and initiatives.
Their announcements have created a ripple effect across business education, triggering imitation by peers worldwide. Indian business schools, acutely sensitive to rankings, accreditations, and market perception, feel this pressure intensely. In the absence of visible AI initiatives, they risk appearing outdated to applicants, recruiters, and regulatory bodies.

This has intensified what institutional theorists describe as mimetic isomorphism: the tendency of organisations to copy prestigious peers to signal legitimacy. Ethical guidelines and AI usage policies developed by institutions such as Harvard, MIT, Stanford, or Wharton are often replicated almost verbatim in Indian contexts, despite vast differences in resources, faculty capability, and student preparedness.

The result is a proliferation of polished, forward-looking documents that mask weak internal mechanisms. The trained faculty, assessment redesign, monitoring systems, and governance structures required to make these policies functional are frequently absent. This produces symbolic compliance rather than substantive change.

Bold declarations, murky details

Across business schools, policy documents emphasise AI integration in teaching, assessment, and research. Yet operational details often remain vague. Faculty and students are left grappling with unresolved questions: Which AI tools are permitted in coursework? What constitutes acceptable assistance versus academic misconduct? What kinds of courses genuinely build employer-relevant AI literacy?

Policies frequently contain contradictory signals, encouraging experimentation while warning against “overuse” without defining either. This ambiguity forces faculty to interpret guidelines individually, resulting in inconsistent practices across courses and departments.
Many institutions rely heavily on AI-detection tools despite evidence of false positives, creating a false sense of control and increasing the risk of unfair academic sanctions. Over time, this erodes trust in institutional policy and exposes the familiar gap between strategic intent and operational reality.

Optimistic narratives, structural constraints

Business schools often frame AI as a tool for personalised learning, productivity gains, and research efficiency. Less visible in these narratives are the structural constraints that limit execution. While over half of Indian higher-education institutions reportedly use generative AI for content creation, and more than 60 percent permit student use, deeper adoption remains uneven. AI tutoring systems, adaptive platforms, and automated grading are expanding, but often without corresponding pedagogical redesign.

Viewed through the lens of neo-institutional decoupling, these developments resemble surface-level alignment rather than structural transformation. Policies assume faculty have the time, skills, and institutional support to redesign courses, evaluate AI-assisted work, and mentor students on responsible use. In reality, many institutions face limited computational resources, uneven digital literacy, shortages of AI-trained faculty, and persistent funding constraints. The result is aspirational policy documents that outpace institutional capacity.

Dependence on external vendors

In the absence of in-house expertise, universities increasingly rely on EdTech firms and technology vendors for policy drafting, faculty training, curriculum design, and data governance frameworks. While such partnerships can accelerate adoption, they also risk producing vendor-driven solutions that prioritise scalability over academic nuance.
Standardised frameworks may overlook contextual needs, disciplinary variation, and pedagogical values, further distancing policy from practice.

AI’s broader disruption of business education

AI’s influence extends well beyond assignments and assessments. It is reshaping the foundations of business education itself, from quantitative methods and strategy formulation to ethics, governance, and organisational design. As firms embed AI into managerial authority and decision processes, business schools must reassess what it means to train future leaders.

Globally, institutions are still catching up. UNESCO reports that fewer than 10 percent of educational institutions have formal generative AI policies. At the same time, student demand continues to rise. GMAC data suggest that nearly 40 percent of global business students want more AI-focused coursework, with demand particularly strong in Asian markets. Employers increasingly seek graduates who combine business judgment with AI literacy, analytical reasoning, and strategic discernment.

What leaders say and what faculty experience

AACSB data reveal a persistent disconnect between leadership intent and faculty reality. While 85 percent of deans report encouraging AI integration, only 37 percent plan to allocate dedicated funding. Although 78 percent of faculty already encourage AI use for brainstorming and ideation, only 63 percent feel adequately supported. Mandatory AI training remains rare, with only 13 percent of schools requiring it for students, 12 percent for staff, and 9 percent for administrators.

Faculty frequently cite time constraints, ethical ambiguity, and limited strategic guidance as key barriers. While many recognise AI’s potential to enhance engagement, concerns about its impact on critical thinking persist. Roughly half believe AI supports higher-order reasoning, while a third worry it undermines it.
This mix of optimism and anxiety reflects the transitional stage in which business education currently finds itself.

Branding more than blueprint

For many institutions, AI policies function less as implementation roadmaps and more as branding statements. Without sustained investment in faculty development, infrastructure, governance, and assessment redesign, these policies remain symbolic. Preparing students credibly for an AI-enabled future requires institutional adoption grounded in capability, not competition.

Some global examples offer useful direction. Columbia’s generative AI policy clearly specifies permissible uses, mandates disclosure, and treats unauthorised use as plagiarism. Harvard and the University of Chicago emphasise accuracy, data protection, and ownership. Duke University’s partnership with OpenAI, through its DukeGPT pilot, demonstrates how controlled experimentation can inform scalable adoption of AI.

Indian institutions can adapt these lessons by clarifying use cases, mandating transparent attribution, and redesigning assessments to foreground student reasoning. Initiatives at the Indian Institute of Science, such as reflective writing, in-class analysis, oral defences, and reduced reliance on unsupervised take-home exams, demonstrate how evaluation can evolve responsibly in an AI-rich environment.

Towards purpose-driven AI integration

India has an opportunity to shape AI governance in business education around learning quality rather than symbolic compliance. For business schools, this means moving beyond declarative statements towards action: clearly defining AI use in coursework, investing in continuous faculty and student training, redesigning assessments to reduce misuse incentives, ensuring equitable access to approved tools, and enforcing policies transparently.

Some Indian business schools have begun this journey.
Sustained progress, however, will depend on leadership commitment, early stakeholder engagement, faculty empowerment, and a willingness to treat AI integration as an iterative, learning-driven process rather than a one-time reputational exercise. Only then can business schools move from signalling readiness to building it.

The article is co-authored by doctoral scholar Gunjan Dandotiya from IMT Ghaziabad and Prof. Kiran Mahasuar from SPJIMR.