How crypto entrepreneur Bill Zanker helped Trump and Melania launch memecoins; as of December 10, $TRUMP was down 92% from its peak and $MELANIA was down 99% (Bloomberg)

Bloomberg: How crypto entrepreneur Bill Zanker helped Trump and Melania launch memecoins; as of December 10, $TRUMP was down 92% from its peak and $MELANIA was down 99% — No one wants to claim credit for helping the first couple launch cryptocurrencies that plummeted more than 90% from their peak.

Korean AI startup Motif reveals 4 big lessons for training enterprise LLMs

We've heard (and written, here at VentureBeat) lots about the generative AI race between the U.S. and China, as those have been the countries with the groups most active in fielding new models (with a shoutout to Cohere in Canada and Mistral in France). But now a Korean startup is making waves: last week, the firm known as Motif Technologies released Motif-2-12.7B-Reasoning, another small-parameter open-weight model that boasts impressive benchmark scores, quickly becoming the most performant model from that country according to independent benchmarking lab Artificial Analysis (beating even regular GPT-5.1 from U.S. leader OpenAI).

But more importantly for enterprise AI teams, the company has published a white paper on arxiv.org with a concrete, reproducible training recipe that exposes where reasoning performance actually comes from — and where common internal LLM efforts tend to fail. For organizations building or fine-tuning their own models behind the firewall, the paper offers a set of practical lessons about data alignment, long-context infrastructure, and reinforcement learning stability that are directly applicable to enterprise environments. Here they are:

1: Reasoning gains come from data distribution, not model size

One of Motif’s most relevant findings for enterprise teams is that synthetic reasoning data only helps when its structure matches the target model’s reasoning style. The paper shows measurable differences in downstream coding performance depending on which “teacher” model generated the reasoning traces used during supervised fine-tuning. For enterprises, this undermines a common shortcut: generating large volumes of synthetic chain-of-thought data from a frontier model and assuming it will transfer cleanly. Motif’s results suggest that misaligned reasoning traces can actively hurt performance, even if they look high quality. The takeaway is operational, not academic: teams should validate that their synthetic data reflects the format, verbosity, and step granularity they want at inference time. Internal evaluation loops matter more than copying external datasets.

2: Long-context training is an infrastructure problem first

Motif trains at 64K context, but the paper makes clear that this is not simply a tokenizer or checkpointing tweak. The model relies on hybrid parallelism, careful sharding strategies, and aggressive activation checkpointing to make long-context training feasible on Nvidia H100-class hardware. For enterprise builders, the message is sobering but useful: long-context capability cannot be bolted on late. If retrieval-heavy or agentic workflows are core to the business use case, context length has to be designed into the training stack from the start. Otherwise, teams risk expensive retraining cycles or unstable fine-tunes.

3: RL fine-tuning fails without data filtering and reuse

Motif’s reinforcement learning fine-tuning (RLFT) pipeline emphasizes difficulty-aware filtering — keeping tasks whose pass rates fall within a defined band — rather than indiscriminately scaling reward training. This directly addresses a pain point many enterprise teams encounter when experimenting with RL: performance regressions, mode collapse, or brittle gains that vanish outside benchmarks. Motif also reuses trajectories across policies and expands clipping ranges, trading theoretical purity for training stability. The enterprise lesson is clear: RL is a systems problem, not just a reward model problem. Without careful filtering, reuse, and multi-task balancing, RL can destabilize models that are otherwise production-ready.
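To make the difficulty-aware filtering idea concrete, here is a minimal Python sketch of a pass-rate band filter. It assumes a `rollout` callable that runs a task once against the current policy and returns pass/fail; the function names, band bounds, and sample count are illustrative placeholders, not values from Motif’s paper.

```python
from typing import Callable, Iterable

def estimate_pass_rate(task: str, rollout: Callable[[str], bool], n: int = 8) -> float:
    """Estimate a task's pass rate by sampling n rollouts from the current policy."""
    return sum(rollout(task) for _ in range(n)) / n

def filter_tasks(
    tasks: Iterable[str],
    rollout: Callable[[str], bool],
    low: float = 0.2,   # drop tasks the policy almost never solves (no learning signal)
    high: float = 0.8,  # drop tasks it almost always solves (nothing left to learn)
) -> list[str]:
    """Keep only tasks whose estimated pass rate falls inside the difficulty band."""
    return [t for t in tasks if low <= estimate_pass_rate(t, rollout) <= high]
```

In a real pipeline the pass-rate estimates would come from the same rollouts used for training, and the band would be re-evaluated as the policy improves; the point is simply that tasks outside the band contribute little useful, and sometimes destabilizing, gradient signal.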
4: Memory optimization determines what is even possible

Motif’s use of kernel-level optimizations to reduce RL memory pressure highlights an often-overlooked constraint in enterprise settings: memory, not compute, is frequently the bottleneck. Techniques like loss-function-level optimization determine whether advanced training stages are viable at all; a rough sketch of the idea appears at the end of this piece. For organizations running shared clusters or regulated environments, this reinforces the need for low-level engineering investment, not just model architecture experimentation.

Why this matters for enterprise AI teams

Motif-2-12.7B-Reasoning is positioned as competitive with much larger models, but its real value lies in the transparency of how those results were achieved. The paper argues — implicitly but persuasively — that reasoning performance is earned through disciplined training design, not model scale alone. For enterprises building proprietary LLMs, the lesson is pragmatic: invest early in data alignment, infrastructure, and training stability, or risk spending millions fine-tuning models that never reliably reason in production.
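To illustrate the kind of loss-level memory optimization the paper gestures at, here is a minimal PyTorch sketch, assuming a standard decoder whose final hidden states feed an `lm_head` projection. It computes the language-modeling cross-entropy in sequence chunks under activation checkpointing so the full [batch, seq, vocab] logits tensor is never materialized at once. This is a generic illustration of the technique, not Motif’s actual kernels, and every name and the chunk size are assumptions.

```python
import torch
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

def chunked_lm_loss(
    hidden: torch.Tensor,       # [batch, seq, d_model] final hidden states
    lm_head: torch.nn.Linear,   # projection from d_model to vocab size
    labels: torch.Tensor,       # [batch, seq] target token ids, -100 = ignore
    chunk_size: int = 1024,
) -> torch.Tensor:
    """Cross-entropy over a long sequence without holding all logits in memory."""

    def chunk_loss(h: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        logits = lm_head(h)  # [batch, chunk, vocab]: only one chunk is live at a time
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            y.reshape(-1),
            ignore_index=-100,
            reduction="sum",
        )

    total = hidden.new_zeros(())
    n_tokens = int((labels != -100).sum())
    for start in range(0, hidden.size(1), chunk_size):
        h = hidden[:, start:start + chunk_size]
        y = labels[:, start:start + chunk_size]
        # checkpoint() frees the chunk's logits after the forward pass and
        # recomputes them during backward, capping peak activation memory.
        total = total + checkpoint(chunk_loss, h, y, use_reentrant=False)
    return total / max(n_tokens, 1)
```

Production implementations typically go further and fuse the projection and loss into a single kernel; the chunk-and-recompute pattern above trades a second forward pass over each chunk for a bounded activation footprint, which is often what makes an advanced training stage fit on shared hardware at all.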

Bungie's Marathon will arrive on PS5 and PC in March

Bungie’s Marathon has a new release window. The survival extraction shooter was originally set to hit PlayStation 5 and PC in September, but by June, Sony had delayed it indefinitely. Now, with a plagiarism issue largely in the rearview mirror, Bungie has confirmed that Marathon will arrive in March and plans to sell it for $40.

Alongside the release date and price announcement, Bungie released a 23-minute video that takes a deep dive into the game and shows off the current state of Marathon. New features include proximity chat and a solo mode, while Bungie says it has upgraded the environmental storytelling and visual fidelity. Gritty environments provide a nice contrast to the glossy sci-fi sheen that defined Marathon’s visual language in our earliest looks at the game.

There’s a lot more on deck for Marathon’s first year, including new maps and events. Bungie also plans to release more shells, which are akin to character classes that can be customized by changing your loadout. The Rook shell, for instance, is a new one that the studio has added since the alpha playtests. This shell allows you to join a run that's already in progress. You’ll have a limited loadout, but you’re not really risking anything valuable as you run around to loot items.

There’s a lot riding on Marathon. Parent company Sony Interactive Entertainment said last month that Destiny 2 had not lived up to its expectations and it wrote down the value of Bungie’s assets by $204 million. Back in August, Sony asserted more control over Bungie and said the developer was “shifting into a role that is becoming more part of PlayStation Studios.”

That’s hardly the only issue Bungie has faced this year. The studio admitted in May that one of its former employees plagiarized the work of artist Fern Hook by enabling it to be used in Marathon’s in-game textures. Earlier this month, Hook said that Bungie and Sony had resolved the matter “to my satisfaction.”