Laptop docking stations may evolve into ‘AI docks’ with video and apps

For better or worse, laptop docking stations have generally been "dumb" devices. Synaptics and its customers are hoping to change that.

Right now, there are two main technologies that "compete" in the docking station space: USB4 (which Intel puts its own spin on with its Thunderbolt 4 technology) and DisplayLink (a technology Synaptics bought in 2020). Thunderbolt supplies more raw bandwidth to docks, while DisplayLink uses compression to deliver a high-speed video experience that approximates Thunderbolt. Normally, the best Thunderbolt docks compete against DisplayLink docks, and it's easy to get lost in the raw horsepower that a Thunderbolt dock offers. Synaptics, however, believes its high-speed signaling finesse can give it a leg up in future devices.

Synaptics sells its DisplayLink chips to dock makers like Anker, Kensington, Plugable, and Ugreen, making Synaptics the key chipmaker in those docking stations. Although Synaptics does plan a major expansion into the "competing" USB4 technology, it showed off a dock concept this week at a tech exhibition at its San Jose, California headquarters, one that turned the dock into something like a thin client, with basic video and possibly even apps living at the edge.

As of now, perhaps the closest approximation to Synaptics' vision would be the Anker Prime Charger, a 250W USB-C charging dock with an integrated display. But Synaptics was especially proud of its Astra series of IoT SoCs, whose SL2610 series leverages a "Kelvin" NPU that Google contributed to the industry as an open-source design. What do you get when you take a regular dock and add an Astra? A "smart" dock.

A Plugable concept docking station with an Astra chip connected. Mark Hachman / Framework

Synaptics showed off office applications, consumer applications, and video all running on an Astra development board connected to a Plugable DisplayLink dock. At CES 2026, the company expects to show off some LLM AI models running on top of that, said Ganesh Tekkatte, director of product marketing at Synaptics. "It's a traditional dock, but it's now also an AI-enabled dock," added Harsha Rao, vice president of high-speed interfaces and distributed compute at Synaptics.

Synaptics calls this "edge AI," and it was a key focus of the demonstrations the company showed off. It all sounds somewhat familiar: one demonstration showed gesture controls being used to operate a consumer video streaming device, with visual recognition coming next. That's a feature we've seen before with the Microsoft Kinect (though, years later, Synaptics can now do it far smaller and for far cheaper). Another opportunity is the automotive space, where your car could recognize you and adjust your seat, heating, and entertainment options differently than for other drivers in your family.

In the PC market, Rao said an intelligent dock could replace a business PC in a hotel's conference center or hotel room, or in a shared business environment. Putting intelligence in a dock could solve three problems: diagnosing any problems the user might have right at the edge, intelligent bandwidth monitoring and management, and failure analysis of accessories connected to the dock.

Intelligent bandwidth management could be an especially interesting feature, since DisplayLink usually works with a generic 10Gbps USB-C port rather than a specialized Thunderbolt connection. Rao said the dock could recognize that certain apps (like email) could always be routed to a connected 1080p display, while more intensive apps (like CAD) could be directed to a 4K display. "And the idea is that you could connect that with an on-screen display, because nobody wants to use the joystick [on the back of the display]," Rao said.
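Synaptics didn't describe how such routing would actually be wired up, but the basic policy is easy to picture. The sketch below is a minimal, hypothetical illustration in Python: the app categories, display names, and the pick_display function are invented for this example and aren't part of any Synaptics or DisplayLink API.

```python
from dataclasses import dataclass

@dataclass
class Display:
    name: str
    width: int
    height: int

# Hypothetical dock configuration: one 1080p panel and one 4K panel.
DISPLAYS = [
    Display("DP-1", 1920, 1080),
    Display("DP-2", 3840, 2160),
]

# Hypothetical classification of apps by how demanding their output is.
LIGHT_APPS = {"mail", "chat", "calendar"}
HEAVY_APPS = {"cad", "video_editor", "3d_viewer"}

def pick_display(app: str) -> Display:
    """Send light apps to the lowest-resolution display and heavy apps
    to the highest-resolution one; anything else gets the best panel."""
    by_pixels = sorted(DISPLAYS, key=lambda d: d.width * d.height)
    if app in LIGHT_APPS:
        return by_pixels[0]   # e.g. email on the 1080p display
    if app in HEAVY_APPS:
        return by_pixels[-1]  # e.g. CAD on the 4K display
    return by_pixels[-1]      # default: the most capable display

if __name__ == "__main__":
    for app in ("mail", "cad", "browser"):
        d = pick_display(app)
        print(f"{app:>8} -> {d.name} ({d.width}x{d.height})")
```

In a real dock, the classification step is presumably where the NPU would earn its keep, inferring what kind of content a window is producing rather than relying on a hard-coded list.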
Rao also said Synaptics is working on a USB4 solution, which will debut in about a year and a half. He called DisplayLink a "poor man's GPU," and said the market is moving toward a more universal USB4 solution that could provide a cost-effective alternative to Thunderbolt 4 and Thunderbolt 5. "And now is the time for us to do that."

Edge AI via sight and sound

Synaptics also showed off its vision for universal presence detection using the Astra chip. Presence detection is nothing new: in Windows (Settings > Accounts > Sign-in options), you might see an option for "dynamic lock," which uses a paired Bluetooth phone to detect when you're nearby. But Synaptics is working with Dell's Pro series laptops to integrate a presence-detection sensor with the webcam, and is working with Lenovo to add presence detection to Lenovo's displays as well.

Synaptics showing off universal presence detection. Mark Hachman / Framework

In the demonstration, the presence detection simply identified which of two displays the user was looking at and blurred the other (a rough sketch of that logic appears at the end of this article). Synaptics has also implemented gesture controls for moving the screen and adjusting volume, though its customers aren't obligated to use them. One of Synaptics' customers used the webcam for presence detection, but Synaptics would prefer a dedicated edge sensor that could provide the same function for about 20 cents more in cost and up to 50 milliwatts of power.

Synaptics also showed off how a smart display could sense the "owner" of the display and prioritize their voice during a video call.

Edge AI can be used to "lock on" to a specific user. Mark Hachman / Framework

Again, this all feels somewhat familiar. Brian Krzanich's tenure at Intel was marked by BMX bikers performing stunts on a CES stage, and by claims that edge sensors would supply the data for which Intel's CPUs constantly clamor.

"We are the only company that can actually put a solution mindset to this," Rao said. "What I find out with my docking customers is what their compute needs, then we go talk to the processor [team]. Intel cannot do that. The edge is not just coming with a sledgehammer and saying, we've got a processor. We actually go there and say, what is the solution that I can solve with all the LEGO blocks we have?"
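As promised above, here's a minimal sketch of how the multi-monitor attention demo behaves. It's purely illustrative: the Monitor class, the read_gaze_target placeholder, and the blurred flag are hypothetical stand-ins, since Synaptics hasn't published an API for this feature.

```python
from dataclasses import dataclass

@dataclass
class Monitor:
    name: str
    blurred: bool = False

def read_gaze_target(monitors: list[Monitor]) -> Monitor:
    """Stand-in for the edge sensor / NPU gaze estimate.
    Always 'looks at' the first monitor so the sketch stays runnable."""
    return monitors[0]

def apply_attention_blur(monitors: list[Monitor]) -> None:
    """Blur every display the user isn't currently looking at."""
    focused = read_gaze_target(monitors)
    for m in monitors:
        m.blurred = m is not focused

if __name__ == "__main__":
    setup = [Monitor("left"), Monitor("right")]
    apply_attention_blur(setup)
    for m in setup:
        print(f"{m.name}: {'blurred' if m.blurred else 'sharp'}")
```

The interesting engineering, of course, lives inside the placeholder: estimating gaze on a low-power dedicated sensor rather than shipping camera frames to the host.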