To lead with AI, start smaller

In the first week of January, every gym in the country is filled with people who have decided that this is the year they will transform their health. They will eat better. They will get more sleep. They will exercise daily. By February, most of these newcomers are nowhere to be seen. Changing multiple health behaviors at once is extremely difficult unless a major health event, such as a heart attack or a diabetes diagnosis, forces the issue. Consistent, incremental change is not only more sustainable, it is often faster, because drastic changes invite setbacks such as injuries.

The same applies in business. We see this playing out with AI right now, as many companies find themselves caught between two flawed strategies: paralyzing caution, waiting for the technology to be “proven,” and moonshot ambition, in which a massive transformation promises to reinvent the entire organization. Waiting almost guarantees falling behind competitors who have already mastered a technology that will reshape business models. At the same time, research consistently shows that most major transformations fail. They can consume enormous resources – often as much as 10% of annual revenue – only to leave organizations exhausted and distracted.

What if the way forward with AI is not a grand transformation, but a day-by-day sharpening?

The power of honing

In our new book, Hone: How Purposeful Leaders Defy Drift, we argue that organizations must shift from relying on periodic, wholesale reinventions to ongoing, targeted micro-adjustments. Transformations may sometimes be necessary, but what we call “honing” – making small but intentional changes that build cumulative momentum – is largely underutilized. Just as a chef sharpens a knife daily to keep it in good condition, rather than waiting until it becomes dull and requires a disruptive regrinding, organizations can hone their approach to AI in ways that are less risky, more agile, and ultimately faster and more effective than transformation.

Honing is not a magic formula for success, but it is no less ambitious than transformation. It structures progress differently: improvement is built into everyday practice rather than deferred until there is perfect consensus, mature technology, or flawless infrastructure. It is also often faster, because it avoids the setbacks and costly corrections that come from rushing into sudden, sweeping change. By steadily aligning with market shifts and making incremental improvements, teams maintain momentum and can adapt to new insights in real time.

When leaders adopt a honing mindset with AI, it becomes part of daily organizational work rather than an occasional campaign. Instead of one grand achievement, the focus is on a series of small, targeted experiments that build momentum. Here’s what honing looks like when applied to AI.

  • Optimize existing systems before aiming for full automation. For many organizations, simply enhancing existing processes with AI – rather than trying to replace them wholesale – can unlock immediate value. In areas such as customer service or supply chain management, this may mean integrating AI into existing platforms to streamline workflows, augment human decision-making, or improve forecasting accuracy. These steps may not deliver radical transformation overnight, but they build capacity, confidence, and momentum. Most importantly, the practice of using AI creates learning that can be applied elsewhere.
  • Make “minimum viable moves.” Applied to AI, this means breaking big challenges into manageable experiments. Instead of trying to implement AI across the entire supply chain, a company might start by using machine learning to optimize inventory for a single production line. Instead of trying to automate all customer interactions, a team can pilot a chatbot for one service category and evaluate its effectiveness. At an operational level, an organization might pilot an AI forecasting tool in one region before scaling it company-wide.
  • Don’t wait for the next iteration of the model. Efforts to implement AI often get bogged down in debates about how long it will take to reach artificial general intelligence (AGI) or what the next generation of models will bring. Although it helps to have a view of what’s coming, you are better prepared for the future by practicing with the tools that exist today than by waiting for better versions. Today’s moves rarely get in the way of future adaptations. Organizations can build robust practices for machine learning workflows, model interpretability standards, and AI ethics checklists that evolve alongside the technology.
  • Design a system that promotes continuous progress. Teams should feel that engaging with AI in some way is not optional, without feeling paralyzed by the need to get it perfect. Incentives should reward adoption rather than punish “failure.” In fact, we’d rather never hear the phrase “fail fast” again. No one likes failure; incentives should reward teams that use the technology and learn from it. Standards and expectations should then be raised steadily as the organization learns.

These examples share a common thread: they don’t wait until the technology stabilizes or the solution becomes obvious. They build progress through smaller, clear wins that strengthen trust and accelerate adoption. And they all depend on a management system aimed at producing a specific behavioral outcome.

If you want people to embrace AI, you have to change the systems that guide them. These moves will not stick unless you adjust your company’s management systems – the formal and informal rules that govern how the organization operates. We call management systems the “nervous system” of an organization because they are what drive change or, more often, prevent people from changing.

Here are some ways management systems can be adjusted to reinforce AI efforts.

  • Decision rights: Some degree of central oversight of the portfolio of AI experiments an organization runs may be necessary. A “let a thousand flowers bloom” approach to decentralized experimentation can make it difficult to share and accelerate early pilot learnings, forcing each part of the organization to chart its own course.
  • Performance evaluation: Add AI adoption to goals; just be careful about what gets measured – if the metric is early pilot success, it may inadvertently lower the bar on ambition.
  • Budgets: Leadership can allocate some flexible funds that allow teams to test and scale AI ideas quickly, rather than tying them to multi-year capital projects.
  • Meeting norms: We’ve seen some teams add an “AI moment” to regular meetings, in which team members share what they’ve learned. This normalizes experimentation and makes AI part of the culture rather than a separate campaign.

When organizations continually adjust these systems, they integrate AI into everyday decision-making. The result can be a culture that sharpens its edge daily, rather than one that stagnates until a dramatic transformation is forced upon it.

The lesson is simple: don’t wait for perfect information or universal buy-in. Treat AI as a subject for experimentation: test small-scale applications, monitor the results carefully, and adjust constantly. Honing keeps AI aligned with an organization’s core purpose through continuous feedback, evaluation, and correction. And if honing can work for AI adoption, imagine how many other challenges facing the modern organization it can address as well.

Stop planning the moonshot. Start honing.
