Chip wafer shortage will run through 2030 as AI demand overwhelms supply: SK Hynix chief

The global shortage of semiconductor wafers will not ease before the end of the decade, SK Group Chairman Chey Tae-won said, delivering one of the most definitive long-range forecasts yet from the head of the world's leading supplier of high-bandwidth memory (HBM) chips.

Speaking to reporters on the sidelines of Nvidia's GTC conference in San Jose, California, Chey said the industry faces a wafer deficit of more than 20% and that at least four to five years of capacity building lie ahead before supply can match demand. "The current shortage could continue until 2030," Chey said, according to a Reuters report.

The chairman attributed the squeeze directly to artificial intelligence infrastructure. "AI actually wants to have a lot of HBM, and once you make the HBM, we have to use a lot of wafers," the report said, quoting Chey.

SK Hynix holds a 57% share of the global HBM market and a 32% share of overall DRAM, making it the second-largest DRAM supplier in the world, according to Counterpoint Research.

Chey also said SK Hynix was preparing a strategy to stabilise DRAM prices, though he declined to detail it. "I cannot just announce right here, but I guess that our CEO is going to announce a new plan for how to stabilise the price of the DRAM," he said, Reuters reported.

A structural shift, not a cycle

Industry analysts largely agree with the direction of Chey's assessment, if not entirely with its timeline.

"This is no longer a cyclical imbalance. It is a structural reallocation of the memory market driven by AI infrastructure economics," said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. "The biggest mistake right now is to view this as a wafer or DRAM shortage. The constraint is systemic."

Shrish Pant, director analyst at Gartner, offered a more nuanced read. A 2030 horizon, he said, assumes AI demand grows without interruption, a scenario that is not guaranteed.
"HBM wafer reallocation is very real and is definitely impacting the market till the end of 2027," Pant said. "I see a sustained demand for HBM to continue to grow, with more complex, high-performance HBM keeping prices higher."

He added that some rationalisation in AI infrastructure spending cannot be ruled out, and that traditional DRAM prices could improve by 2028 as new fabs, including Samsung's P5, SK Hynix's Yongin facility, and Micron's Boise expansion, come online, though prices would remain above 2025 levels.

What makes this shortage different from previous memory cycles is supplier behaviour. Gogia pointed out that memory vendors are locking in multi-year agreements, committing future HBM output well in advance, a pattern inconsistent with cyclical markets. "This is how a strategic resource market behaves when demand visibility is high, and margins are concentrated in a specific segment," he said.

IDC, in a February analysis, projected that 2026 DRAM and NAND supply growth would come in at 16% and 17% year-on-year, respectively, well below historical norms, a consequence of Samsung, SK Hynix, and Micron reallocating cleanroom capacity toward higher-margin AI products.

Enterprise buyers caught in the crossfire

That capacity reallocation is now working its way through enterprise procurement, creating what Gogia described as a two-tier market: hyperscalers and sovereign-scale buyers who secure capacity early, and enterprises that operate on delayed access, reduced configuration flexibility, and higher costs. "Supply is not just sold. It is reserved ahead of time," he said.

Pant was equally direct on the enterprise dilemma. "There is no silver bullet," he said.
"It's going to be a mix of ensuring supply at any cost for some, absorbing higher prices and passing them on for many, and optimising their bill of materials as well as software and architecture to optimise for memory instead of only optimising for compute."

For organisations still slow to react, he warned, the calculus is straightforward: "Accept higher prices now, or pay even higher prices tomorrow."

For CIOs, Gogia said, the shift demands a fundamental change in planning posture. "This is no longer a procurement exercise. It is a supply risk management problem." Memory, he argued, must be treated as a constrained strategic input, not a commodity, across every infrastructure decision over the next two years.

New fabs, new technologies, but no quick fixes

New capacity is coming, but not fast enough to change the near-term picture. Samsung's P5 facility in Pyeongtaek is expected online by 2028, with SK Hynix also investing heavily in new fabrication capacity. Yet both analysts cautioned that new capacity will largely be optimised for AI workloads, limiting relief for conventional enterprise demand.

Emerging alternatives such as CXL memory pooling and processing-in-memory have drawn attention, but neither analyst sees them as near-term relief. Pant noted that architectural shifts of that magnitude happen slowly and require sustained high prices to drive mass investment. Gogia was more pointed: "These technologies will help enterprises adapt and optimise. They will reduce pressure at the margins. But they will not eliminate the structural constraint on memory before 2030."

The geopolitical dimension adds another layer of uncertainty.
With HBM manufacturing concentrated in South Korea, US export controls tightening, and China accelerating domestic memory capacity through firms like CXMT, Gogia said memory had "crossed the threshold from a commercial component to a geopolitical asset", introducing a category of supply risk that traditional vendor diversification alone cannot mitigate.

The article originally appeared in NetworkWorld.