
Through the People Lens: What hesitation reveals about the system

Written by Rachel Jannaway | Mar 4, 2026 9:47:34 AM
Through the 5 Lenses – March 2026 Edition

 

This is the second in a series exploring each of the 5 Lenses, the framework I use to help boards and leaders see more of the system when navigating complex change.

Last month, I explored the Purpose lens and how purpose can erode under pressure, not because leaders lose sight of it, but because the system makes it harder to use. This month, I turn to the People lens. Not because people are the next item on a checklist, but because how people respond to change is the most honest signal an organisation will ever receive about the conditions it has created.

Most organisations believe they take people seriously during change. They communicate early. They consult. They run training. And yet, when change stalls, the instinct is often the same: the problem must be with the people. They are resistant. They lack confidence. They have not bought in.

The People lens challenges that instinct. It does not treat individuals as obstacles to change. It treats their response as information. It assumes people are behaving rationally.

 

Confidence is not a personal trait. It is a system condition.

Recently, I ran a session with a group of charity digital leads exploring what gets in the way of their people being confident with everyday use of AI at work. The room was not hostile to AI. These were thoughtful practitioners, many already experimenting in their own roles. But the dominant emotion was not excitement or resistance. It was responsibility.

When I asked what would help build confidence, nobody asked for better tools or more advanced training. They talked about permission. Whether it was safe to try things. Whether leadership had been clear about what was acceptable. Whether anyone had taken the time to sit alongside individuals and understand what AI might mean in their specific role. One participant raised something the room clearly felt: perhaps confidence also comes from consciously deciding what not to automate, so that people can continue doing the work they value.

That conversation was not really about AI. It was the People lens in action. The hesitation in the room was not a knowledge gap. It was a rational response to unclear permission, unspoken expectations and a felt sense that getting it wrong might carry consequences nobody had named.

Two things became clear. First, confidence sits between organisational conditions and personal experience. Leadership stance, governance clarity and available resources shape the environment. Individual competence, fear of mistakes and responsibility towards those they serve shape how it is experienced. Second, people adapt rationally to the systems they are in. If the system does not make it safe or practical to try something new, most people will not try it, regardless of how many emails say it is encouraged.

 

What research helps us see more clearly

The idea that people's responses to change are rational adaptations, not character flaws, is well supported across several disciplines. The People lens helps us interpret that research as practical diagnostic insight.

Self-Determination Theory identifies three psychological needs that underpin sustained motivation: autonomy, competence and relatedness (Deci & Ryan, 2000). When any of these are undermined, motivation shifts from internal commitment to surface compliance. A major review in Nature Reviews Psychology found that workplace technologies frequently weaken autonomy in implementation, even when the tools themselves are beneficial (Gagné et al., 2022). Through the People lens, this becomes a simple but demanding question: are we strengthening people's autonomy, competence and connection as we introduce change, or quietly eroding them?

Amy Edmondson's research on psychological safety adds another layer. Psychological safety is not about comfort. It is the shared belief that it is safe to admit uncertainty, question decisions and raise concerns without being punished. The American Psychological Association's 2024 Work in America survey found that psychological safety was a strong predictor of how employees felt about AI at work. Those experiencing lower psychological safety were significantly more likely to worry about being replaced and less likely to feel positive about technology helping them work effectively. Safety and the relationship with a direct manager mattered more than age or technical skill (APA, 2024). In other words, how people are treated shaped confidence more than who they were.

A 2025 longitudinal study of 381 employees found that AI adoption reduced psychological safety over time, which in turn was associated with increased symptoms of depression. Ethical leadership significantly moderated this effect: where leaders were transparent and supportive, the negative impact was reduced (Kim et al., 2025). This is not a wellbeing aside. It is a governance finding. How change is introduced has measurable psychological consequences.

In field research published in Harvard Business Review, Ranganathan and Ye observed eight months of AI adoption at a 200-person company. AI did not reduce work. It intensified it. Employees worked at a faster pace, took on a broader scope of tasks and extended work into more hours of the day, often without explicit instruction to do so (Ranganathan & Ye, 2026). In organisations already operating without slack, these intensity shifts are absorbed by individuals.

Through the People lens, this is the critical move: when workload increases quietly and boundaries blur, hesitation is not a motivation problem. It is a signal that the system has changed faster than its protections.

 

The People lens breaks down differently at different levels

Just as purpose looks different depending on where you sit in an organisation, so does the People lens.

At board level, people often appear as metrics: skills gaps, engagement scores, training budgets. The board's People lens responsibility is stewardship. It is to ask whether organisational decisions are creating conditions where a reasonable person could adapt and sustain the pace being asked of them. The risk is conflating confidence with competence and responding with more training when the real issue is permission, workload or psychological safety. The more searching question is not "have we equipped people?" but "have we designed this change so that it is realistically liveable?"

At line manager level, the People lens becomes immediate. Managers see who has gone quiet, who is complying without conviction and who is quietly overwhelmed. Their responsibility is translation: turning organisational intent into conditions people can work within. The risk here is mistaking permission for communication and responding with reassurance when what people actually need is structural adjustment, protected time or clarity about what will stop. Managers cannot create safety or capacity alone; they can only model it if the level above has genuinely authorised it.

At individual level, the People lens is lived experience. Do I feel able to try this? Do I know what is expected? Will someone help me if I get it wrong? Is anyone asking what this feels like from where I sit?

These are not communication failures. They are structural gaps between intention and lived reality.

 

Being in the loop is not the same as leading

The most common phrase in conversations about AI adoption is "human in the loop". It sounds reassuring. But being in the loop usually means reviewing, approving or checking what the system has already shaped. The human is present. The human is not leading.

A better standard is "human in the lead", where people genuinely shape how tools are used, what gets automated and what does not. But that standard only means something if the conditions match it.

In many organisations, the conditions do not match the language. What looks like empowerment is often delegation of uncertainty. People are told they are trusted to make judgements about when and how to use new tools, but the boundaries have not been defined, the risk appetite has not been articulated and the workload has not been adjusted to make space for learning. The language of agency is present. The infrastructure of support is not.

The Ranganathan and Ye research illustrates this. Employees were not forced to use AI. Adoption was voluntary. People enthusiastically expanded tasks across boundaries and absorbed additional work. That can look like empowerment. But without guardrails or workload redesign, it became intensification. The initiative came from individuals. The cost was carried by individuals.

When organisations say they want people to lead change but do not redesign workload, clarify risk or model vulnerability at senior level, they have not created the conditions for leadership. They have delegated uncertainty downward. The People lens exposes that gap and asks who is absorbing the difference between rhetoric and reality.

 

Safety is infrastructure, not atmosphere

If psychological safety is as important as the evidence suggests, it cannot be treated as a cultural aspiration. It needs to be designed into how work happens.

Edmondson identifies three practices that build safety. First, setting the context by naming why candour matters for this challenge. Second, actively inviting input, particularly from those who are quieter or more junior. Third, responding productively when someone takes the risk of honesty, especially when the news is unwelcome. She describes this as a discipline of pausing before reacting and choosing learning over blame.

For purpose-led organisations, this has particular resonance. Many charities and social enterprises have deeply held values around inclusion, dignity and care. But values held in principle do not automatically translate into felt safety in practice. A team can believe in openness and still punish dissent quietly, through exclusion, withdrawal of trust or subtle signals that questioning is unwelcome. The People lens asks not whether the organisation values safety, but whether safety is experienced in the moments that matter most: when someone says "I am not sure this is working."

 

Using the People lens in practice

Using the People lens means asking, at every stage of change: what is the lived experience of this change, and what is that experience telling us about the conditions we have created?

If you want people's responses to inform better decisions rather than simply be managed, four questions matter:

1. When someone hesitates, do we treat it as data about the system or as a problem with the individual?

2. Are autonomy, competence and connection genuinely supported, or are we assuming that communication equals support?

3. Have we named what will stop or slow to make this change sustainable for the people delivering it?

4. Is there a safe, practical way for people to say "this is not working" without it being treated as resistance?

These questions do not replace good communication, training or engagement. They simply ask whether our efforts are designed around how people actually experience change, or around how the organisation wishes they would.

 

People are the signal, not the obstacle

The People lens alone will not resolve unclear authority or fix broken processes. But without understanding how change lands on real people in real roles, those other efforts risk solving the wrong problem.

If you see hesitation, silence or quiet compliance, assume the system is speaking through your people. Before you ask how to increase confidence, ask what conditions you have created. The People lens shifts the question from "how do we get people on board?" to something more uncomfortable and more useful: "what are we currently asking individuals to carry that really sits at an organisational level?"

Next month, I'll turn to the Power lens, where decisions are made, authority is felt, and energy and capacity determine what is actually possible.

 

References

American Psychological Association (2024). 2024 Work in America Survey: Psychological Safety in the Changing Workplace. American Psychological Association.

Deci, E. L. & Ryan, R. M. (2000). The "what" and "why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227-268.

Edmondson, A. C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley.

Gagné, M. et al. (2022). Understanding and shaping the future of work with self-determination theory. Nature Reviews Psychology, 1, 378-392.

Kim, B. J., Kim, M. J. & Lee, J. (2025). The dark side of artificial intelligence adoption: Linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership. Humanities and Social Sciences Communications, 12(704).

Ranganathan, A. & Ye, X. M. (2026). AI Doesn't Reduce Work - It Intensifies It. Harvard Business Review, February 2026.

 

This article was co-created through a human-led process using several AI models – including ChatGPT, Claude, Gemini, and Perplexity – as thinking partners. It reflects our commitment to ethical, transparent, and accountable use of AI, where human judgement, curiosity, and oversight remain central.