AI That Stays on Your Server: Nextcloud's Approach to Privacy-First AI

The AI assistant market has filled up quickly. Microsoft Copilot, Google Gemini, and a growing list of third-party tools all promise to help you write, summarise, and search faster. Most of them work the same way: your content is sent to a remote service, processed on infrastructure you have no visibility into, and returned to you.

For many use cases that trade-off is acceptable. But for organisations that self-host precisely because they care about where their data goes, it creates a contradiction: you went to the trouble of running Nextcloud on your own hardware to keep your files private, and now an AI feature is sending those files somewhere else.

Nextcloud’s AI integration is designed to avoid that contradiction.

How Nextcloud Handles AI

Nextcloud’s AI features are built around the concept of AI providers — backends that do the actual model inference. The key architectural decision is that these backends can be local. If you run a local model via Ollama or a similar runtime on your own server, Nextcloud’s AI features can use it without any data leaving your infrastructure.
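As a rough sketch of what that wiring can look like on a server you control (the app IDs and config keys below are illustrative assumptions, not guaranteed names; check your Nextcloud version's admin documentation before running anything):

```shell
# On the server: pull a model into Ollama, which serves a local
# HTTP API on http://localhost:11434 by default
ollama pull llama3

# In Nextcloud: enable the assistant plus a provider app that can
# talk to an OpenAI-compatible endpoint (app IDs are illustrative)
php occ app:enable assistant
php occ app:enable integration_openai

# Point the provider app at the local Ollama endpoint instead of a
# cloud API, so no request ever leaves this machine (key name assumed)
php occ config:app:set integration_openai url --value "http://localhost:11434"
```

The important part is the last line: the provider's base URL resolves to localhost, so "which backend" is an admin decision, not a hard-coded destination.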

That means:

  • Document summaries are generated on your server, by a model running on your hardware
  • Text generation, translation, and smart search work the same way
  • No API keys sent to OpenAI or Anthropic, no usage data collected by a third party
  • The model, its configuration, and all processed content stay under your control

You can also configure external AI providers if you choose — but the choice is yours, made explicitly, not as a default.

What the AI Assistant Actually Does

Nextcloud’s AI assistant (built into the Hub releases) covers a practical range of tasks:

Text tasks — summarise a document, rewrite a paragraph, generate a draft from a prompt, translate text between languages. These work on any text content accessible to Nextcloud, including documents in collaborative editing.
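To make "on your server" concrete: when the backend is Ollama, a summarisation task ultimately reduces to a local HTTP call like the one below (the model name and prompt are placeholders; the endpoint is Ollama's standard generate API):

```shell
# A local inference request: nothing here resolves outside the server
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Summarise the following document: ...",
  "stream": false
}'
```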

Smart search — semantic search across your files, rather than keyword matching. Find a document based on what it is about, not just what it is called.

Meeting transcription and notes — Nextcloud Talk (the integrated video conferencing feature) can transcribe calls and generate structured meeting notes. When this runs on a local speech model, the audio never leaves your server.

Image recognition — automatic tagging and categorisation of photos, running locally.

The range of available features depends on which apps you install and which AI provider backends you configure. The architecture is modular: you enable what you need.
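Because the design is modular, enabling or removing a feature is an app-level operation from the command line. A minimal sketch, assuming shell access to occ (the `recognize` app ID matches the image-recognition app at the time of writing, but verify against your instance):

```shell
# See which apps are currently enabled
php occ app:list

# Enable only the pieces you want, e.g. local image recognition
php occ app:enable recognize

# Disable a feature again without touching the rest
php occ app:disable recognize
```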

Comparing This to Microsoft Copilot

Microsoft Copilot for Microsoft 365 accesses your emails, documents, meetings, and calendar to generate responses and summaries. It is technically sophisticated and well-integrated. It also operates under Microsoft’s terms, on Microsoft’s infrastructure, subject to Microsoft’s data practices and US law.

For organisations that have chosen self-hosted Nextcloud specifically to avoid that kind of dependency, often as a matter of digital sovereignty, Copilot is not a viable add-on. Nextcloud’s local AI approach is.

This does not mean local AI is always better. Running capable models locally requires real hardware — CPU performance or a GPU matters for response speed. A small organisation running Nextcloud on a modest server will get slower or less capable AI results than Microsoft’s cloud-scale infrastructure can deliver.

The trade-off is explicit: you give up some raw capability and convenience in exchange for knowing exactly what happens to your data.

What This Requires in Practice

To run Nextcloud’s AI features with local models, you need:

  • A Nextcloud instance (Hub releases have the most complete AI integration)
  • A local model runtime — Ollama is the most straightforward option, with good support for common open models
  • Enough compute on the server — a modern multi-core CPU works for lighter tasks; a GPU significantly improves performance for larger models and transcription
  • The relevant Nextcloud apps enabled and configured
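Before wiring anything into Nextcloud, it is worth confirming the pieces above are actually in place. A quick sanity check, assuming Ollama on its default port:

```shell
# Ollama's default API port; returns a JSON list of pulled models
curl http://localhost:11434/api/tags

# Rough capability check: how much memory is available for models
free -h
nvidia-smi   # only relevant if a GPU is present
```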

For clients where we manage Nextcloud, adding local AI features is a configuration task that builds on an existing installation. For new deployments where AI is a priority, we can factor the hardware requirements in from the start.

If you are interested in what Nextcloud’s AI integration would look like for your setup, it is worth a conversation.