The Korea Times
LG AI Research introduced EXAONE 4.5, a multimodal artificial intelligence (AI) model designed to understand and reason across both text and images, marking a significant step in the company's push to expand its proprietary AI ecosystem.

The new model integrates a self-developed vision encoder with a large language model into a single system, forming a vision-language model capable of processing complex documents such as contracts, technical drawings and financial statements with a high degree of accuracy.

According to LG AI Research, EXAONE 4.5 outperformed competing models, including those from OpenAI and Alibaba, across a range of benchmarks measuring visual understanding and reasoning. The model recorded an average score of 77.3 across five STEM-related evaluations, surpassing GPT-5 mini, Claude Sonnet 4.5 and Qwen3 235B. Across 13 evaluation metrics — including general visual understanding and document-based reasoning — the model also exceeded the performance of GPT-5 mini, Claude Sonnet 4.5 and Qwen3-VL, the company said.

In coding performance, EXAONE 4.5 scored 81.4 on the L