Transparency
My method for understanding the world is fundamentally Bayesian.1 Because of this, when I evaluate whether I trust2 something (a statistic, a study, an organization, a person), I try to consider the evidence for and against. I also look at it from a risk perspective - what are the consequences of being wrong? Should I err one way or the other because of those consequences?
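One way to sketch this (purely illustrative labels, not a formula I literally sit down and compute) is standard Bayes' rule plus an expected-cost comparison:

$$P(\text{trustworthy} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{trustworthy})\,P(\text{trustworthy})}{P(\text{evidence})}$$

and then extend trust only when the expected cost of wrongly withholding it outweighs the expected cost of wrongly granting it:

$$P(\text{trustworthy} \mid \text{evidence}) \cdot C_{\text{distrust wrongly}} > \bigl(1 - P(\text{trustworthy} \mid \text{evidence})\bigr) \cdot C_{\text{trust wrongly}}.$$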
After mulling these things over, I often default to being stingy with trust. Individuals get more trust, especially via referral from other people I already trust3. But I generally have a strong distrust of non-person entities (such as corporations)4, and essentially the only way for these entities to change that is to take explicit actions to earn trust. Conceptually, I bundle these trust-building actions as "being transparent".
To me, being transparent means showing the reasoning and processes behind your decisions. It means clearly stating your values and desires and then mapping them onto your actions. Without transparency, we are left guessing at motivations and at what may be going on behind the scenes. With it, we can reason about whether everything adds up and convinces us we are seeing the truth.
Of course, entities can still lie while pretending to be transparent. Our counter is to try to find the evidence that exposes the lying and spread it widely.
Because of the connectivity of the modern era, this is much easier than it used to be. We don't need to verify everything ourselves; we just need incentives that encourage some people to examine and verify information5, plus mechanisms to amplify those voices. If someone finds evidence of wrongdoing, it should be much easier for others to confirm that the evidence actually points that way. In this way, credible people can extend their credibility to the entities they verify. Larger or more important entities affect a wider field of people, which hopefully translates into more people invested in verifying their trustworthiness.
Now, transparency is not always easy. You need to invest in mechanisms to show your processes, and in a corporate environment you might be leaking secrets that give advantages to your competitors. I don't love these justifications, but I'm willing to live with them. I'd prefer that corporations found it very difficult to operate without public trust and, through that mechanism, were forced to be more transparent6. But what about government?
I consider engendering trust to be one of government's most important functions. It wields enormous power, and a functioning democracy needs assurances that this power is not being abused. This idea isn't exactly new; governments are aware of it and at least pay it lip service. But it is complicated by the fact that government is huge. Conciseness and clarity are both important to transparency, and both often seem missing from governmental efforts. Combine this with the fact that our government is multi-faceted and fractured: city, county, state, and country may all have different ideas on a topic, not to mention a revolving door of political hand-offs that can erase any transparency you attempt to create four years later. Given all this, it isn't surprising that trust in the government to do what is right is low.7
But without those efforts at transparency, I'm certainly not inclined to trust at all.
Wikipedia and LessWrong provide various levels of explanation of Bayesianism. My simplified view is that you should use evidence to justify belief.↩
More realistically, trust should not be a binary. I trust my wife and I trust my friends, but I trust one of these more. The same holds on the other end of this spectrum for distrust.↩
Seeing this makes me understand nepotism better. It's an unfortunate situation and I should probably be a little more willing to trust people in situations of low risk.↩
I used to do my banking with Wells Fargo. Every few years, another situation like this arises, and if my trust in them weren't already zero, it would drop lower. I now bank with Ally, who, at the very least, is doing a better job of hiding the evidence that I shouldn't trust them. Other common examples of this are any of the GDPR fines of the last few years.↩
For instance, the actions of the Supreme Court can be fairly hard to analyze. They are public, but there is a lot of content and the legalese is often beyond me. So I engage with a blog and a podcast that break down the aggregate and direct my attention to specifics of interest.↩
In some sense this already occurs, but it is much less successful against monopolies or monopolistic competition, such as tech giants and phone and internet companies. If it were more successful, would Comcast really have any customers?↩
Now of course some of this presumes rational agents. If you insist on distrusting the government regardless of anything it actually does, then the idea of more transparency is useless to you. Being able to change your mind in light of new evidence is the bare minimum for productive discourse - it is unfortunate that this minimum is not always reached.↩