How Trust is Broken and Repaired
And why this AI moment is the right time to talk about trust in schools
If you’ve been following this newsletter for a while, you know I’ve spent the bulk of the last six months learning about two topics: AI and trust. Since the onset of the coronavirus pandemic in 2020, I’ve noticed trust has become a bigger issue at schools, both explicitly and implicitly. It seems to influence why, when, and how school communities do and don’t work well. I’ve also noticed how much issues of AI and issues of trust overlap.
So, I’ve been trying to get a stronger grasp on what trust is, how we build it, and how it is broken and repaired. This is a continuation of my post from October, “What Trust Is and Isn’t,” where I define trust and its major components (care, competence, and integrity). I recommend taking a look at that post before diving into this one.
How Trust is Broken
In his book How Trust Works, Peter Kim defines trust violations as “incidents that damage the positive expectations we might otherwise have about the world.” Violations are usually tied to breaking one of the three elements of trust: care (the perception that a person is invested in your wellbeing and success), competence (the perception that a person’s actions and decisions have rigor and logic), or integrity (the perception that a person’s behavior is consistent and moral).
The nature and severity of trust violations depend on many factors, like the type of relationship you have with a person or an institution, the length of that relationship, the context and stakes surrounding the violation, and the preexisting strength of the trust that has been violated. A violation can occur in one incident, or it can accrue slowly over many incidents.
Trust violations can be powerful and contagious. Kim’s research shows that merely hearing about an unsubstantiated allegation against a person will lower others’ perception of that person’s trustworthiness, even (or especially) if we don’t know anything else about that person. We don’t need to be personally affected by a trust violation in order to mistrust someone and carry that mistrust with us into other situations. In Mistrust, Ethan Zuckerman expands on this idea by explaining how our biases and stereotypes (about race, political affiliation, identity, etc.), along with factors like where we live and what our cultural heritage is, can shape whom we trust, whom we don’t, and how we perceive trust violations.
The lasting impact of trust violations is mistrust, which Charles Feltman defines as “a choice not to make yourself vulnerable to another person’s actions.” Trust is strongly correlated with happiness, productivity, and learning, and a mistrustful culture reveals itself most often in people’s disengagement from all three. People do not invest their abilities, emotions, or effort in environments where they do not trust and do not feel trusted.
How Trust is Repaired
Unless trust is strong and deep, violations can be difficult to repair. First, it can be hard to agree that trust has been violated at all: the person who perceived a violation and the violator may have completely opposing perspectives on the nature and impact of an incident. Second, strategies for repairing trust can backfire. An apology can be perceived as a sincere effort to repair trust or as a confirmation that a person is not deserving of trust. Increased transparency can help people better understand how decisions are made or can reveal flaws in the system that only reinforce mistrust.
A constellation of factors affects how we should approach repairing trust: power dynamics, identity, age, cultural norms, personal experience, etc. But the research I’ve read seems to agree on a few core elements of repairing trust.
We should address the violation. In Building Trust, Robert Solomon and Fernando Flores coin the term “cordial hypocrisy”: “the strong tendency of people in organizations, because of loyalty or fear, to pretend that there is trust when there is none, being polite in the name of harmony when cynicism and mistrust are active poisons, eating away at the very existence of the organizations.” Cordial hypocrisy is built on our avoidance of directly addressing problems in our culture.
We should confront our own biases and preconceptions. Our human nature can make it hard to agree on whether and how a trust violation has occurred. We all possess explicit and implicit biases, an instinct to self-protect, and a desire for validation of our own worldviews. Part of repairing trust is taking the time to understand the experiences and perspectives of the other people involved, whether those people are the violators or the perceivers of the violation. The Management Center offers some perspective-taking exercises that could be useful in processing a violation of trust and working to repair it. Feltman also suggests the simple exercise of asking the other person how they perceived the violation, then listening to their answer and taking it seriously.
We should be vulnerable. Repairing trust requires honest conversation that aims to get at the truth of intent. Kim’s research shows this requires letting go of some of the armor we use to protect ourselves: the use of positional power as a lever, rationalization or defensiveness, and stubborn adherence to particular points of view.
We should be patient. In most cases, trust is not repaired in a single interaction. The issue of trust might need to be revisited several times, and it is the responsibility of all involved in the violation to explicitly commit to the ongoing work of repair.
How to Build a Culture of Trust
Because repairing trust can be so difficult, we should take proactive steps to build and sustain cultures of trust. A violation should not be the first time we address trust in our culture. Here are two concrete steps I think are relevant to schools:
Put in place structures for trust. In his book, Zuckerman cites Lawrence Lessig’s notion of “codes” for trust: structures built into our shared spaces that signal trust or mistrust and reveal what an institution values and who is welcome in it. Watch this short video about Circles at Valor Collegiate Academy in Nashville, TN, USA:
These gatherings are “coded” into the schedule and culture of the school, communicating that trust and social-emotional wellbeing are institutional priorities. Restorative practices like this simultaneously nurture a culture of trust and provide a community with a toolkit for addressing trust violations when they occur.
Understand our own trust “wobbles” and “anchors.” When it comes to trust, it’s not enough to work on our schools; we must also work on ourselves. Frances Frei of Harvard Business School has developed a three-pillar framework for building trust, and in this homework assignment for one of her classes she lays out how to use that framework to identify and work on our trust wobbles (where we might lose someone’s trust) and anchors (where we tend to be most trustworthy).
How might trust inform our approach to AI?
When I facilitate sessions on AI at schools, there are two prompts for audience discussion that I use a lot:
Share how you’re currently using AI and what you think of it.
Discuss some use cases I provide and consider whether they constitute “cheating.”
Almost every time I ask, especially if the audience is a mix of students and educators, there’s an awkward pause, some side-eyeing, and a few nervous giggles. I see a lighthearted but clear reluctance to be vulnerable with each other on this topic, and a willingness to be vulnerable is the most important element of trust.
For more than a year, we’ve built a culture of secretiveness, maybe even shame, around AI in schools, probably because we have until recently viewed it only as a tool for cheating. Breaking down that culture will require us to draw on our trust in each other. Designing a more open approach to AI and learning how people are using it productively will require us to draw on our trust in each other. Asking students and colleagues to make informed, responsible decisions about AI will require us to draw on our trust in each other.
There is no single policy solution for a technology evolving as rapidly as AI is right now. Values will serve us better than contracts. Learning will serve us better than punishment. In addition, as I wrote in my first trust post, research shows that creating detailed, compliance-oriented policies works against trust by implying that we operate in a transactional culture of rules rather than a relational culture of care, competence, and integrity.
From Peter Kim: “If we start with the premise that most of us want to be good, then at least part of the solution to the trust challenges we face is to help each other get there… The findings point to how the relatively high levels of initial trust people have been found to exhibit in others generally make sense. We are right to trust each other.”
Upcoming Ways to Connect with Me
Speaking, Facilitation, and Consultation. I’m currently planning for the second half of 2024 (June to December). If you want to learn more about my consulting work, reach out for a conversation at eric@erichudson.co or take a look at my website. I’d love to hear about what you’re working on!
Online Workshops. I’m thrilled to continue my partnership with the California Teacher Development Collaborative (CATDC) with two more online workshops. Join us on March 18 for “Making Sense of AI,” an introduction to AI for educators, and on April 17 for “Leveling Up Our AI Practice,” a workshop for educators with AI experience who are looking to build new skills. Both workshops are open to all, whether or not you live/work in California.
Conferences. I will be facilitating workshops at the Summits for Transformative Learning in Atlanta, GA, USA, March 11-12 (STLinATL) and in St. Louis, MO, USA, May 30-31 (STLinSTL). I’m also a proud member of the board of the Association of Technology Leaders in Independent Schools (ATLIS) and will be attending their annual conference in Reno, NV, USA, April 7-10.
Links!
One educator recently taught a middle school math class, gave himself a “C-,” and wrote about three things he did that worked. Such a helpful glimpse into the instructional moves that define effective teaching, even when it’s imperfect.
Digital Promise has released a framework for AI literacy. It’s simple and clear, and I also recommend exploring the links they provide: some wonderful resources.
I’ve just started exploring Latimer AI, an LLM intentionally trained on high-quality data to combat bias in AI models and to more fully represent diverse perspectives. For background on it, I recommend this segment from the On the Media podcast.
In this short video tour, one educator shares his exploration of Lex, an AI writing assistant with some powerful features.

If you are interested in issues of copyright, intellectual property, and the way AI models are trained, I recommend following The New York Times’ lawsuit against OpenAI. Here is a good explainer to get you started.
From my local community: New Bedford, home to the fourth-largest school district in Massachusetts, is down to its last school librarian.