Why Commercial Open Source Is Not a Trust Model
Open source improves the potential for trust. It does not create trust.
Open source software is one of the most important developments in modern computing. It enables independent review, long-term survivability, and community-driven improvement. Much of today's secure infrastructure exists because of open source. In fact, this service is built on open source software.
So let's be clear at the start:
Open source is good. Open source is valuable. Open source is necessary.
But it is not, by itself, a trust model. Increasingly, it is used as a marketing tool in situations where it does not reduce the trust a user must place in a company.
This distinction matters most for corporate-provided, privacy-focused services that use open source as a selling point. That is the focus of this article.
What People Think "Open Source" Means
When users see a company advertise:
"Our client is open source"
"Our software is fully open source"
they often infer several things that are not guaranteed:
- That all of the "experts" have reviewed it
- That the software behaves exactly as the published code suggests
- That the compiled app they installed was built from that source
- That nothing extra was added at build time
- That the server-side software behaves the same way
- That no additional logging or instrumentation exists
In other words, users are led to believe that open source equals verifiable trust.
It does not.
Source Code Is Not the Same as Running Code
Open source describes availability of source code, not control over execution.
Once software is:
- compiled by someone else,
- distributed as a binary,
- or run on servers you do not control,
you are trusting:
- their build system,
- their compiler,
- their deployment process,
- their configuration,
- and their operational discipline.
Publishing source code does not eliminate that trust.
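To illustrate the build-system part of that list, here is a hypothetical sketch. The source below is exactly what users would see in a repository; whether the extra logging exists in the shipped binary depends on a compiler flag (VENDOR_BUILD is an invented name) that only the vendor's build pipeline would set:

#include <stdio.h>

/* Hypothetical sketch: the repository shows this source. What the
 * shipped binary does depends on how it is compiled:
 *   cc client.c                  reviewer's default build, no extra branch
 *   cc -DVENDOR_BUILD client.c   the vendor's (assumed) pipeline
 */
static void handle_login(const char *user, const char *pass)
{
#ifdef VENDOR_BUILD
    /* present only in the vendor's binary; a reviewer compiling the
     * same source with default flags never sees this execute */
    fprintf(stderr, "debug: %s %s\n", user, pass);
#endif
    (void)user;
    (void)pass;
    /* ...the normal authentication path would continue here... */
}

int main(void)
{
    handle_login("alice", "hunter2");
    return 0;
}

The published source is perfectly truthful. The trust question is what flags, patches, and dependencies the vendor's build actually used.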
Why "Just Audit the Code" Is Not Realistic
A common response is:
"But anyone can review the code."
In practice:
- Almost no users do, and for many projects, nobody ever has
- If there are reviews, they are often paid for
- Paid reviews are rarely adversarial
- Behavior that doesn't break functionality is rarely questioned
- This still doesn't prove that the compiled binary or running server code is the same as the published source
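On that last point: the source-to-binary gap can only be closed when builds are reproducible, so that anyone can rebuild the client and compare the result byte-for-byte against what the vendor shipped. A minimal sketch of that comparison, assuming the project supports reproducible builds (file names here are illustrative):

#include <stdio.h>

/* Sketch: compare a locally reproduced build against the binary the
 * vendor shipped. Only meaningful if builds are reproducible. */
int main(void)
{
    FILE *local = fopen("client-local-build", "rb");  /* built from source */
    FILE *shipped = fopen("client-downloaded", "rb"); /* vendor's binary */
    int cl, cs;

    if (!local || !shipped) {
        fprintf(stderr, "could not open binaries\n");
        return 2;
    }
    do {
        cl = fgetc(local);
        cs = fgetc(shipped);
        if (cl != cs) {
            puts("binaries differ: the shipped build is not the code you read");
            return 1;
        }
    } while (cl != EOF);
    puts("binaries are byte-identical");
    return 0;
}

Few commercial offerings make this check possible in practice, which is exactly why the list above matters.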
Many non-commercial open source packages with hundreds of millions of installs have gone years or even decades with unknown vulnerabilities; a relatively small commercial offering with an open source client will receive far less review. Harmful behavior also does not need to be obvious to be effective. To illustrate this, consider the following example.
Two Open Source Functions That Look the Same but Behave Differently
Below are two short C functions. They are intentionally written to look boring, generic, and low-level.
They:
- have nearly identical structure,
- use the same helper functions,
- produce the same visible results,
- and pass the same tests.
Only one of them runs safely.
Version A
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static uint32_t m(const uint8_t *p, size_t n)
{
    uint32_t v = 0x9e3779b1;

    for (size_t i = 0; i < n; i++)
        v ^= (v << 5) + (v >> 2) + p[i];
    return v;
}

static void d(const char *s, uint8_t o[32])
{
    uint32_t v = m((const uint8_t *)s, strlen(s));

    for (int i = 0; i < 32; i++)
        o[i] = (v >> ((i & 3) * 8)) & 0xff;
}

static int q(const uint8_t k[32], const char *p)
{
    /* placeholder for opaque operation */
    (void)k;
    (void)p;
    return 0;
}

int fn(const char *a, const char *b)
{
    uint8_t k[32];
    uint32_t v;

    if (!a || !b)
        return -1;
    v = m((const uint8_t *)a, strlen(a));
    d(a, k);
    write(3, &v, sizeof(v));
    return q(k, b);
}
Version B
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static uint32_t m(const uint8_t *p, size_t n)
{
    uint32_t v = 0x9e3779b1;

    for (size_t i = 0; i < n; i++)
        v ^= (v << 5) + (v >> 2) + p[i];
    return v;
}

static void d(const char *s, uint8_t o[32])
{
    uint32_t v = m((const uint8_t *)s, strlen(s));

    for (int i = 0; i < 32; i++)
        o[i] = (v >> ((i & 3) * 8)) & 0xff;
}

static int q(const uint8_t k[32], const char *p)
{
    (void)k;
    (void)p;
    return 0;
}

int fn(const char *a, const char *b)
{
    uint8_t k[32];
    uint32_t v;

    if (!a || !b)
        return -1;
    v = m((const uint8_t *)a, strlen(a));
    d(a, k);
    write(3, a, strlen(a) < sizeof(v) ? strlen(a) : sizeof(v));
    return q(k, b);
}
Why This Matters (Without Needing to Read C)
Both functions pass tests and appear normal.
And yet, the behavior differs:
- Version A: safe, writes only derived data
- Version B: unsafe, writes raw input
Most reviewers would miss what B is doing when it is buried in the rest of a codebase. That raw data could be a user's entered plaintext password being written out and stored.
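To make that concrete, here is a minimal unit test of the kind a reviewer or CI pipeline might run. Both versions pass it, because it only observes return values and never the bytes sent to file descriptor 3:

#include <assert.h>
#include <stdio.h>

int fn(const char *a, const char *b); /* link against Version A or B */

int main(void)
{
    assert(fn(NULL, "x") == -1);          /* missing input is rejected */
    assert(fn("x", NULL) == -1);
    assert(fn("secret", "payload") == 0); /* "works" with either version */
    puts("all tests passed");
    return 0;
}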
Open source does not:
- force adversarial review,
- prevent subtle side effects,
- or guarantee behavior.
It only makes review possible, not necessarily effective. It bears repeating: many popular, widely used non-commercial open source products have gone years (and some, decades) before someone noticed a vulnerability. The custom client source code from Billy Joe Jim Bob's Discount Email, Bridal Boutique, and Truck Wash™ is not going to be reviewed anywhere near as thoroughly.
The Trust Gap Widens with Binaries and Servers
The situation becomes more complex when:
- users install precompiled clients, and
- services run on servers they do not control.
In these cases, users cannot:
- verify builds,
- inspect runtime behavior,
- see configuration,
- or observe operational logging.
Even a fully honest provider cannot prove to users:
- what exact code is running,
- how it is configured,
- or what is being observed at runtime.
This is not deception; it is simply the reality of remote services.
Configuration Is Part of the Product
Even for fully open source services, configuration changes everything:
- logging scope
- metadata exposure
- access segmentation
- retention policies
- attack surface
Two installations of the same software can behave radically differently. At that point, "open source" describes the ingredients, not the meal.
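As a sketch of how much a single setting can matter, imagine the same open source server code deployed by two different operators (LOG_LEVEL is an invented configuration variable):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: identical published source, two deployments.
 * The operator, not the repository, decides what ends up in the logs. */
static void log_request(const char *path, const char *body)
{
    const char *level = getenv("LOG_LEVEL"); /* set at deployment time */

    if (level && strcmp(level, "debug") == 0)
        printf("%s %s\n", path, body); /* logs full user content */
    else
        printf("%s\n", path);          /* logs request metadata only */
}

int main(void)
{
    log_request("/send", "private message text");
    return 0;
}

Both operators can truthfully say they run the same open source software; only one of them is logging message bodies.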
Why Open Source Is Used as a Marketing Signal
Because it sounds like proof.
It suggests:
- ethics
- transparency
- safety
- trustworthiness
Historically, that reputation was earned. But today, publishing a repository is easy; running a privacy-respecting service is hard.
When "open source" is used as a primary selling point, it often substitutes for deeper questions that actually determine privacy outcomes.
Open source as a label is not meaningless, but it is incomplete.
Open source improves the potential for trust. It does not create trust by itself.
Trust comes from:
- architecture
- minimization
- clear threat models
- operational discipline
- operational transparency
...not labels.
When evaluating privacy services, "open source" should be one input, not a deciding factor. If this article causes even a moment of pause, a shift from "they say it's open source" to "what am I actually trusting?", then it has done its job.
