The Linux community has always had mixed reactions to the arrival of large language models and generative AI, but Ubuntu has now made its position clear in a project discussion post on "The Future Development Direction of AI in Ubuntu": starting with Ubuntu 26.10 "Stonking Stingray", due in October 2026 as the next major release after 26.04, new AI capabilities will be gradually added throughout the operating system, offered as options rather than pushed on users by default.

Jon Seager, the project's technical lead, said that since the start of 2026 Canonical has been encouraging internal developers to use AI tools more actively, but the focus is not on superficial metrics such as token usage or "how much code is written by AI." Instead, the company wants engineers to develop a genuine understanding of where AI is effective and where it falls short, measured by actual output. According to him, Canonical will not force all teams onto the same AI stack, but will encourage different teams to try different approaches and accumulate organization-level experience over the coming months.
Seager also emphasized that Canonical will not push AI into every corner of Ubuntu, but will treat "responsibility" and "transparency" as the core principles guiding this work. On model selection, Canonical will prioritize open-weight models, open source toolchains, and implementations that rely as much as possible on local, offline inference. When evaluating models, the company will look not only at whether the weights are open but also at whether the model's license terms are compatible with Ubuntu's values.
According to Canonical's plan, future AI functionality in Ubuntu falls roughly into two categories: "implicit" and "explicit" AI features. Implicit AI means integrating AI into existing operating system capabilities, without changing the user's mental model, to improve how those features perform: for example speech-to-text, text-to-speech, OCR, and enhanced screen reading and other accessibility capabilities. Seager believes such features are by nature key accessibility improvements rather than things to be labeled "AI"; in many scenarios they can be implemented efficiently and accurately through open source frameworks, open-weight models, and local inference.
The second category, explicit AI features, covers new capabilities that are more obviously AI-centric. These may include workflows with agent-like capabilities, such as writing documents, generating applications, automated troubleshooting, and even personalized daily news summaries. Canonical acknowledges, however, that such features carry a higher security responsibility, so adequate security mechanisms, isolation, and permission controls must be established in advance to prevent unexpected side effects. In Seager's words, implicit AI will enhance Ubuntu's existing features, while explicit AI will be introduced gradually as new ones.
On the implementation side, Canonical plans to continue building on the "inference snaps" it has introduced previously. According to the company, these Snap packages let users more easily call, locally, model inference capabilities optimized for specific hardware, reducing the complexity of juggling Ollama, Hugging Face, and a large number of quantized model builds. For example, after a user installs an inference snap, if the relevant chip vendor has provided adaptation and optimization, the system can automatically fetch a model build better suited to the current hardware platform. These inference snaps are also subject to the same sandbox confinement rules as other snaps, reducing the risk of a model having indiscriminate access to local data and system resources.
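The hardware-matching behavior described above can be sketched as a simple catalogue lookup. This is an illustrative model only: the variant names, the accelerator identifiers, and the selection logic are all assumptions for the sake of the example, not Canonical's actual inference-snap implementation.

```python
# Illustrative sketch: how an inference snap might pick a model build
# matched to the local accelerator. All names below are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelVariant:
    name: str          # hypothetical build identifier
    accelerator: str   # hardware family this build targets

# Hypothetical catalogue of optimized builds a vendor might publish.
CATALOGUE = [
    ModelVariant("llm-q4-cpu-generic", "cpu"),
    ModelVariant("llm-q4-intel-npu", "intel-npu"),
    ModelVariant("llm-q4-cuda", "nvidia-gpu"),
]

def select_variant(detected: list[str]) -> ModelVariant:
    """Return the first build matching a detected accelerator,
    falling back to the generic CPU build."""
    for variant in CATALOGUE:
        if variant.accelerator in detected:
            return variant
    return CATALOGUE[0]  # generic CPU fallback

print(select_variant(["nvidia-gpu"]).name)   # → llm-q4-cuda
print(select_variant(["riscv"]).name)        # → llm-q4-cpu-generic
```

The real mechanism would involve vendor-published channels and snap metadata; the point here is only the fallback shape, in which an unrecognized platform still gets a working generic build.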
Seager also noted that in the past, fully exploiting large-model capabilities usually required models with larger parameter counts, but recent progress shows small and medium-sized models steadily gaining advanced capabilities such as tool calling. The post cites new models such as Gemma 3 and Qwen3-30B-A3B as demonstrating tool-calling ability, which can in principle be used to search the web, interact with external APIs and file systems, troubleshoot live system issues, and reason about topics beyond the scope of the original training data. One of Canonical's next priorities is therefore to expand team investment, track new model releases as they land, and provide optimized builds for as many chip platforms as possible.
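Tool calling, as mentioned above, generally means the model emits a structured request and the host dispatches it to a local function. The minimal sketch below replaces the model with a canned response so it runs offline; the tool names and JSON shape are assumptions for illustration, not the format of any specific model or runtime.

```python
# Minimal tool-calling dispatch loop. The "model output" is canned here;
# in practice it would come from a local inference runtime.

import json

# Local "tools" the agent may invoke (stubs for illustration).
def read_file(path: str) -> str:
    return f"<contents of {path}>"

def search_web(query: str) -> str:
    return f"<results for {query!r}>"

TOOLS = {"read_file": read_file, "search_web": search_web}

def dispatch(model_output: str) -> str:
    """Parse a model's tool-call JSON and run the matching local tool."""
    call = json.loads(model_output)
    tool = TOOLS[call["tool"]]          # KeyError on unknown tools
    return tool(**call["arguments"])

# A small model emitting a structured tool call might produce:
canned = '{"tool": "read_file", "arguments": {"path": "/var/log/syslog"}}'
print(dispatch(canned))   # → <contents of /var/log/syslog>
```

In a real system the tool result would be fed back to the model for another turn; this sketch shows only the parse-and-dispatch step that tool-capable small models enable.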
Beyond basic inference capabilities, Canonical also envisions a more context-aware operating system experience. Seager said that as more users grow accustomed to working with agents, Ubuntu hopes to present the powerful capabilities Linux has accumulated over the years to a broader audience in a form that is easier to understand and use. The team is planning how to integrate agent-based workflows into Ubuntu, on the premise that they fit the habits of Ubuntu's user base and respect its privacy and security values. In his view, Snap's strict confinement mechanism, together with the groundwork Ubuntu has laid in recent years to integrate core system functions, will help Canonical pursue this goal more safely.
The Linux desktop ecosystem has long been known for its fragmentation. This fragmentation has contributed to the prosperity of the ecosystem to some extent, but it has also often complicated the integration experience and frustrated some users. Canonical believes that if large models can be carefully applied to the system level, they may help users more intuitively understand the capabilities of modern Linux workstations, making the Linux desktop more attractive to a wider range of people.
This vision isn't limited to the desktop. Seager noted that for a site reliability engineer (SRE) managing a large fleet of Ubuntu machines, large models could help in a variety of scenarios, such as interpreting logs during incident handling, speeding up root-cause analysis, or performing a series of planned maintenance tasks under strict guardrails. Canonical's goal is a capability framework that adapts to Ubuntu's different device forms, so that agents can "work as naturally as Ubuntu's native functions" across different interfaces. He stressed that handing some site reliability tasks to agents does not necessarily introduce a new category of risk, because mature production environments already rely on strict access control, audit trails, and clear separation between observation and execution; what Ubuntu wants to provide is a set of basic capabilities that let agents operate within those existing boundaries, such as read-only analysis, granular permissions, and complete auditing of decisions and results.
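The guardrails listed above can be combined in a few lines: an agent action is checked against an explicit grant set, and both allowed and denied attempts land in an audit trail. This is a sketch of the pattern under stated assumptions; the function names and record shape are illustrative, not an actual Canonical API.

```python
# Permission-gated agent action with a complete audit trail.
# All identifiers here are hypothetical, for illustration only.

from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []   # record of every decision and result

def run_action(agent: str, action: str, granted: set[str]) -> str:
    """Execute an agent action only if it was explicitly granted;
    audit both allowed and denied attempts."""
    allowed = action in granted
    result = "ok" if allowed else "denied"
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "result": result,
    })
    if not allowed:
        raise PermissionError(f"{agent} may not perform {action}")
    return result

# Read-only analysis is granted; execution is not.
grants = {"read_logs"}
run_action("sre-agent", "read_logs", grants)        # allowed, audited
try:
    run_action("sre-agent", "restart_service", grants)
except PermissionError:
    pass                                             # denied, still audited
print(len(AUDIT_LOG))   # → 2
```

The design choice worth noting is that denial is also audited: the separation between observation and execution only holds if refused attempts leave the same trace as successful ones.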
In terms of usage scenarios, Canonical envisions that users may one day ask their Linux device directly to troubleshoot a Wi-Fi connection issue, or to automatically stand up an open source software platform that is pre-configured, security-hardened, and reachable over TLS. Going further, such capabilities could even become an entry point for other devices to control a Linux host, with interaction over mobile apps, text messages, voice commands, and other media.
Of course, Canonical also concedes that local inference capability is closely tied to hardware. While the company is working to make open-weight models easier to run on ordinary consumer hardware, models with smaller parameter counts currently cannot compete head-to-head with larger models on many tasks. Seager believes this gap is largely transitional: as chip makers worldwide keep shipping consumer hardware with growing inference capability, what today seems possible only with cutting-edge AI infrastructure will gradually become commonplace over the coming months and years.
He also specifically pointed out that discussions of AI cannot look at performance alone; efficiency must be considered too. It is easy for users to compare the token generation speed of large cloud models directly against local devices, but local accelerators draw far less power for such workloads, which also means the barrier to use should keep falling. Canonical does not expect all of this to happen overnight, but Ubuntu wants to be ready when conditions are ripe, and cooperation with chip vendors and the related enablement work will play an increasingly important role.
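The performance-versus-efficiency distinction can be made concrete with back-of-envelope arithmetic: raw tokens per second favors the cloud model, but tokens per joule can favor the local accelerator. All figures below are made up purely for illustration and do not describe any real hardware.

```python
# Illustrative efficiency comparison: tokens/s alone vs. tokens per joule.
# Every number here is hypothetical, chosen only to show the arithmetic.

def tokens_per_joule(tokens_per_s: float, watts: float) -> float:
    # 1 watt = 1 joule/second, so (tokens/s) / (J/s) = tokens/J
    return tokens_per_s / watts

cloud_rate, cloud_power = 120.0, 700.0   # hypothetical datacenter GPU
local_rate, local_power = 25.0, 15.0     # hypothetical laptop NPU

print(f"cloud: {tokens_per_joule(cloud_rate, cloud_power):.2f} tokens/J")
print(f"local: {tokens_per_joule(local_rate, local_power):.2f} tokens/J")
```

With these made-up numbers the cloud GPU generates tokens almost five times faster, yet the local accelerator delivers roughly ten times more tokens per joule, which is the kind of efficiency argument the article attributes to Seager.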
Taken together, Canonical's signal is clear: Ubuntu does not intend to turn itself into an "AI product", but hopes to introduce AI capabilities gradually in future releases in a way that is prudent, controllable, and consistent with open source values. The team says that throughout 2026 it will work toward the goal of "allowing Ubuntu users to access cutting-edge AI in a prudent, safe, and open-source-compliant manner", with focus areas including engineer education, efficient local inference, accessibility enhancements, and a more context-aware operating system experience.