November 13, 2025 · ai · privacy

Using AI safely - Our privacy concerns

Artificial intelligence can be powerful and genuinely useful. However, in many of today’s tools it also introduces hidden privacy and security risks. In this article, we explain the concerns we see in the current use of AI and how we approach it within Databeamer.

We’ve been asked quite often what role AI plays in our Databeamer service. The
short answer: we hardly use it. And we certainly do not use it to scan or
analyze your data, to train models or to create user profiles. Not openly, and
not quietly in the background.

But as AI tools are now everywhere, we also want to give you some advice. If you
care about privacy, data security, and digital independence, it’s worth taking a
closer look at some aspects of how AI-powered services actually handle your
information.

AI is useful but also a risk for your privacy

Artificial intelligence can be powerful and even helpful. But in many of today’s
tools, it also comes with hidden trade-offs.

Most AI-powered tools today (even those branded under different names) rely on
so-called frontier large language models (LLMs), which are mainly created by
American or Chinese companies. This creates privacy and security risks under
the EU AI Act and the GDPR. European alternatives do exist, but at the moment
they are less common and less well known (see for example Finalist’s tech blog
for 10 European alternatives).

In most cases, these models run in the cloud, meaning the service provider sends
your prompts and data to those external systems for processing. Technically it
is possible to self-host such an LLM, but true self-hosting (running your own
copy of the model) is very rare, as it is costly and requires significant
maintenance.

Some companies use the LLM via pseudo self-hosting: the company’s server acts as
an intermediary or proxy, but the LLM still runs on the frontier provider’s
(e.g. OpenAI’s) infrastructure. This can create the appearance of local hosting
and may offer limited compliance benefits (such as filtering out personal data),
but data still flows to OpenAI’s servers.
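The proxy pattern described here can be sketched as follows. This is a minimal, hypothetical illustration — the redaction patterns, model name, and request shape are our own assumptions, not any vendor’s actual API:

```python
import re

# Hypothetical PII patterns a pseudo self-hosting proxy might strip
# from a prompt before forwarding it to an external LLM provider.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d \-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace recognizable personal data with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

def build_upstream_request(prompt: str) -> dict:
    """The payload the proxy would forward to the external provider.
    Note: even the redacted prompt still leaves your own infrastructure."""
    return {"model": "frontier-llm", "prompt": redact(prompt)}

req = build_upstream_request("Contact jan@example.com or +31 6 12345678")
print(req["prompt"])  # Contact [EMAIL] or [PHONE]
```

Even in this best case, only the personal data your filters happen to recognize is removed; everything else in the prompt still travels to the external provider.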

Most services opt for the hosted API option, which means that all computation,
data and prompt handling occur entirely on the LLM maker’s servers. The
trade-off is clear: when data is processed through those external clouds, it
leaves your controlled environment. The companies behind these AI models may
store, analyze, or reuse that information in ways you cannot fully oversee or
control. In practice, this often means your data leaves the EU and is handled by
large non-European technology providers.

Be careful with AI that connects everything

Many users rely on Microsoft or Google for email, documents, and scheduling.
Both companies now promote their AI assistants (Copilot and Gemini) as helpful
productivity tools. But convenience can come with a cost. Have a look at
Microsoft Copilot as an example.

Microsoft Copilot example

Copilot can automatically access and cross-reference your data across your
entire Microsoft 365 application suite (including Outlook, Teams, Word, and
OneDrive, depending on your subscription). For businesses, this can mean that AI
suggestions are built from sensitive internal information (e.g. emails,
presentations, or chats).

Copilot offers some options to control what it remembers about you and whether
your conversations are used for model training. However, these are opt-out
settings, so it is uncertain how many users will actively change them. In
addition, opting out of model training does not prevent your conversations from
being used for advertising or other purposes. Changes to these settings may take
up to 30 days before they become fully effective.

For better governance of your privacy and security, Microsoft offers an add-on
suite called Microsoft Purview Suite. Purview is only available for higher-tier
business licenses and requires active setup, meaning many smaller businesses or
personal users don’t have these protections by default. So unless your IT team
has carefully configured Microsoft Purview, Copilot can “see” and probably reuse
that content across your organization.
And even when Purview is enabled and configured, it still depends on employees
correctly labeling documents and emails as sensitive so Copilot will skip them.
But will every employee consistently do this? And what about older or archived
content that remains accessible to Copilot by default? Even with a high level of
privacy awareness in your organization, this is an easy step to overlook.

Bing Copilot and personalized ads

If you use Copilot through Bing or Edge, your prompts can also be used for ad
personalization. Microsoft even notes that the “personalization” settings for
Copilot do not affect whether you receive personalized ads. So, your AI
interactions might influence what ads you later see online.

Expanding to other providers

Microsoft also now allows enterprise users to activate Claude (Anthropic) within Copilot.

This is part of Microsoft’s move to diversify beyond OpenAI. But when you enable
it, your data no longer stays within Microsoft’s environment; it moves to
Anthropic (in fact, routed through AWS). Enabling it therefore means giving up
your existing data governance, audit and compliance controls, while adding the
rules and jurisdiction of yet another American company. If privacy and
compliance are important to you, this means you’re losing visibility,
auditability, and EU governance.

“Privacy Washing”: The illusion of control

Generative AI models rely on massive amounts of data to learn and improve. This
makes any user data that model providers can access extremely valuable. However,
there is growing scrutiny over what types of data are collected, how it is
obtained, and where it is stored, with legal questions emerging around
copyright, privacy, and data protection. The EU’s investigation into Google’s
Gemini is a good example of this.

To deflect this scrutiny, “Big Tech” has mastered the art of privacy washing:
presenting themselves as privacy-conscious while quietly collecting vast amounts
of user data. You might see dashboards full of toggles and controls to
customize, but even with all privacy settings maxed out, these companies often
continue collecting usage data, metadata, and behavioral signals.

Be cautious with AI-powered browsers like Atlas

A newer trend we’re seeing is the rise of AI-powered browsers, such as Atlas,
which comes with built-in AI features like ChatGPT integration. Atlas actively
tracks user browsing behavior, clicks, and interaction patterns to personalize
your online experience. While it claims not to store passwords or full personal
data, it’s unclear what information it does retain and for how long.

Its “agent mode”, where the AI can autonomously perform actions on your behalf,
has also raised security concerns in independent research. For instance, a
recent study by the University of Sydney highlighted potential vulnerabilities
related to AI-driven automation and data exposure.
In practice, this could mean that such a browser might have visibility into
everything shown on your screen, including unencrypted messages or file content
displayed within secure platforms like Databeamer.

So for now we certainly do not recommend using the Atlas AI browser to access
Databeamer, or indeed any other website.

Keeping the human touch

Not everything needs to be automated. We believe that humans (and not
algorithms) should remain at the center of collaboration, decision-making and
certainly creativity. Technology should assist, not replace, and should
certainly not exploit the people who use it.

That’s why we’re deliberate about AI. When we add it, it will be because it adds
genuine value to you, and because we can guarantee it respects your privacy,
your sovereignty, and your trust.

A look ahead

We don’t believe in using AI just because it sounds innovative. But we do
continue to explore responsible, EU-based AI technologies that could enhance
productivity while keeping every process local, auditable, and under user
control.

In our field, we see potential future uses that could genuinely make your work
easier. For example we are now working on:

  • Generating form templates automatically from a spoken or written prompt (as part of our ‘open request’ functionality);
  • Generating rules or regex patterns (as part of our validation functionality within file requests).
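To give a feel for the second item: a validation rule for a file request often boils down to a regular expression. The example below is purely illustrative — the naming convention and function are hypothetical, not Databeamer’s actual validation format:

```python
import re

# Hypothetical validation rule: uploaded invoice files must be named
# like "invoice_2025-11_ACME.pdf" (prefix, year-month, client code).
INVOICE_RULE = re.compile(r"^invoice_\d{4}-(0[1-9]|1[0-2])_[A-Z]{2,10}\.pdf$")

def validate_filename(name: str) -> bool:
    """Check an uploaded file's name against the rule."""
    return INVOICE_RULE.fullmatch(name) is not None

print(validate_filename("invoice_2025-11_ACME.pdf"))  # True
print(validate_filename("scan001.pdf"))               # False
```

A rule like this can be checked entirely on our own servers; no file content or prompt ever needs to reach an external AI provider.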

When we introduce AI for features like this, it will be:

  • Transparent about what it does.
  • Optional and consent-based.
  • Designed so your data never leaves the EU and never trains a general-purpose model.

Because AI should work for you, and not the other way around.