Moving Responsibly at Machine Speed: Navigating AI-Driven Decisions

Artificial intelligence (AI) is well on its way to becoming the most powerful business tool in history, accelerating decision-making and communication to machine velocity.

But what happens to leadership accountability when decisions that once took weeks are made in seconds by algorithms? This increased speed does not reduce responsibility; it magnifies it. And this is why building a foundation of leadership responsibility is essential to successfully navigating AI-driven decisions.

To be sure, AI’s promise is clear – efficiency, growth, and innovation – and the statistics are pouring in to support that promise. According to McKinsey’s recently published report, The State of AI in 2025, nearly 88% of companies globally use AI in at least one business function, and 92% plan to invest in generative AI within the next three years. I think those survey numbers will look low in a year, but the generative and agentic AI direction is clear.

As algorithms increasingly influence decisions, the question then becomes: Who owns the outcomes? The answer lies within us, specifically those of us in leadership roles in business and nonprofit organizations.

In a previous column, I wrote that while change is inevitable, “leadership trust can provide your team the courage to take risks, weather the storm and find the inspiration to follow you into the future.”

That was only a few months ago, and while that principle has not changed, the conditions around it have. The rise of generative and agentic AI, including the trend toward multiple and custom agents built by people without any particular coding knowledge, means that leaders must now build trust at the same pace decisions are made. This includes trust not only in people, but in processes and in systems.

Indeed, as others have recently written, including Forbes contributor Jason Wingard in “Accountable Leadership is the New Currency,” leaders must ensure that AI aligns with organizational values, regulatory standards, and societal expectations because, without transparency and ethical oversight, speed can become a liability.

As Peter Hinssen argues in The Uncertainty Principle, the future is not about eliminating uncertainty but learning to operate within it. AI does not reduce uncertainty; it compresses the time leaders have to respond to it, making adaptability and resilience essential.

There is little doubt that AI can compress decision cycles and scale communication exponentially. Leading at machine speed therefore requires the discipline to introduce “pause points,” intentional moments where humans reinsert judgment to prevent silent, high-velocity errors.
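To make the idea concrete, here is a minimal sketch in Python of one possible pause point: an automated action executes only when it is both high-confidence and low-stakes, and otherwise queues for human review. The names and thresholds (Decision, CONFIDENCE_FLOOR, HIGH_STAKES_USD) are hypothetical illustrations, not drawn from any particular product or framework.

    from dataclasses import dataclass

    CONFIDENCE_FLOOR = 0.90   # pause if the model is less confident than this
    HIGH_STAKES_USD = 10_000  # pause any decision with a larger dollar impact

    @dataclass
    class Decision:
        action: str           # what the system wants to do
        confidence: float     # model's self-reported confidence, 0.0 to 1.0
        dollar_impact: float  # estimated financial exposure of the action

    def execute_or_pause(decision: Decision) -> str:
        """Act automatically only when the decision is both high-confidence
        and low-stakes; otherwise hold it for a human reviewer."""
        if (decision.confidence < CONFIDENCE_FLOOR
                or decision.dollar_impact > HIGH_STAKES_USD):
            return f"PAUSED for human review: {decision.action}"
        return f"EXECUTED automatically: {decision.action}"

    # Example: a refund answer given with modest confidence is held, not sent.
    print(execute_or_pause(Decision("quote bereavement refund policy", 0.72, 480.0)))

Note the design choice: either trigger alone is enough to pause, so speed is traded away only on the decisions where an error would be costliest.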

A Case Study

Take one major airline’s AI misstep as a cautionary tale. In early 2024, it faced a lawsuit after its AI-powered chatbot gave a passenger incorrect guidance on bereavement fare refunds. Trusting the chatbot, the traveler purchased a full-price ticket, only to be denied the promised refund later. A court ruled in favor of the passenger, ordering compensation. 

The incident did not stem from malice or negligence; it was a system working “as designed” without appropriate human-in-the-loop controls. The reputational and financial fallout was swift, underscoring that in the age of machine speed, responsible leadership means anticipating, not just automating.

Takeaway for Leaders

While we as leaders often tend – and rightly so – to avoid absolutes, what is increasingly clear to me is that we cannot delegate accountability guardrails entirely to algorithms, particularly when it comes to AI ethics, governance, and compliance. A new Gartner report, which argues that ethics, governance, and compliance practices must evolve, finds that fewer than 25% of IT leaders feel confident in managing AI governance.

For forward-looking organizations, addressing this leadership gap means:
  • Governance frameworks: cross-functional AI oversight boards that include legal, risk, ethics, and operations leaders, not just IT or data scientists.
  • Human-in-the-loop ownership: clear, assigned responsibility for each AI system’s behavior and impact.
  • Bias mitigation: AI outputs regularly audited for fairness and compliance (a simple sketch of such an audit follows this list).
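Here is a minimal sketch of what a recurring output audit might look like, assuming each AI decision is logged with the group it affected and the outcome. The approval-rate disparity check and the DISPARITY_LIMIT threshold are illustrative assumptions; a production audit would rely on established fairness metrics and legal review.

    from collections import defaultdict

    DISPARITY_LIMIT = 0.10  # flag groups whose approval rate deviates by >10 points

    def audit_outcomes(records):
        """records: iterable of (group, approved) pairs from the decision log.
        Returns {group: approval_rate} for groups outside the disparity limit."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in records:
            totals[group] += 1
            approvals[group] += int(approved)
        overall = sum(approvals.values()) / sum(totals.values())
        return {g: approvals[g] / totals[g]
                for g in totals
                if abs(approvals[g] / totals[g] - overall) > DISPARITY_LIMIT}

    # Example: both groups deviate from the 0.67 overall approval rate.
    log = [("A", True), ("A", True), ("A", True),
           ("B", False), ("B", False), ("B", True)]
    print(audit_outcomes(log))  # {'A': 1.0, 'B': 0.333...}

Run on a schedule, a check like this turns “regularly audited” from a policy statement into a measurable control.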

But governance alone is not enough. Leaders must create a culture where teams feel empowered to question AI outcomes, slow the pace when something seems off, and flag risks early—especially when machine speed compresses time for thoughtful review.

At my nonprofit, we see firsthand how quickly trust can falter. Our work has always centered on independent accountability and building trust in the marketplace. AI does not change that mission; it underscores its urgency.

Leadership Imperatives for a Machine-Speed Operating Reality

To lead responsibly in this new reality, business and nonprofit executives must:
  1. Own the Outcomes: AI does not absolve responsibility. Algorithmic decisions should trace back to appropriate human-in-the-loop engagement and guardrails.
  2. Communicate Clearly: Internally and externally, explain how AI is used and what safeguards exist.
  3. Design for Resilience: Treat AI governance like cybersecurity, embedding oversight, audit, and assurance.
  4. Build a Culture of Responsible Speed: Internally, encourage teams to raise concerns and question AI outputs. Externally, demonstrate that rapid automation does not diminish transparency.

Hinssen reminds us that uncertainty demands resilience. Pause points are not inefficiencies; they are resilience mechanisms that allow leaders to absorb shocks and prevent silent, high-speed errors. Embracing uncertainty as a design principle means building systems and cultures that thrive in ambiguity, not fear it.

AI accelerates decisions, but accountability must keep pace. The question is not whether AI will transform leadership. It already has. The real question is: Are we prepared to lead responsibly, and to uphold trust, at machine speed?