I helped design rocket engines for NASA’s space shuttles. That’s why companies need AI that’s as trustworthy as aviation technology

By Adam Markowitz

When I was an aerospace engineer working on NASA’s Space Shuttle program, trust was critical. Every screw, every line of code, and every system had to be carefully validated and tested, or the shuttle would never leave the launch pad. After each mission, the astronauts would walk through the office and thank the thousands of engineers who got them home safely to their families – that’s how deep the trust in our systems ran.

Despite the “move fast and break things” rhetoric, technology should be no different. New technologies need to build trust before they can accelerate growth.

By 2027, about 50% of organizations are expected to deploy AI agents, and McKinsey predicts that by 2030, up to 30% of all work could be performed by AI agents. Many cybersecurity leaders I speak with want to bring AI in as quickly as possible to enable the business, but they also realize they need to make these integrations safe and secure, with the right guardrails in place.

For AI to deliver on its promise, business leaders need to trust it – and that trust won’t happen on its own. Security leaders must take a lesson from aerospace engineering and build confidence into their operations from day one, or risk missing out on the growth AI can accelerate.

The relationship between trust and growth is not theoretical. I lived it.

Building a business on trust

After NASA’s Space Shuttle program ended, I founded my first company: a platform for professionals and students to showcase and share evidence of their skills and competencies. It was a simple idea, but it required our customers to trust us. We quickly discovered that universities would not work with us until we demonstrated that we could handle sensitive student data securely. That meant providing assurance through a number of avenues: earning a clean SOC 2 report, answering lengthy security questionnaires, and completing various compliance certifications through painstaking manual processes.

That experience shaped the founding of Drata, where my co-founders and I set out to build a layer of trust between companies. By helping GRC leaders and their companies demonstrate their security posture to customers, partners, and auditors, we remove friction and accelerate growth. Our trajectory from $1M to $100M in annual recurring revenue in just a few years is proof that companies are realizing the value, and they are slowly starting to shift from viewing their GRC teams as cost centers to viewing them as business enablers. This translates into real, tangible results – we’ve seen $18 billion in security-impacted revenue from security teams using our SafeBase Trust Center.

Now, with artificial intelligence, the stakes are even higher.

Existing compliance frameworks and regulations – such as SOC 2, ISO 27001, and GDPR – are designed for data privacy and security, not for AI systems that generate text, make decisions, or operate autonomously.

Thanks to legislation such as California’s newly enacted AI safety standards, regulators are slowly starting to catch up. But waiting for new rules and regulations is not enough – especially when companies are relying on new AI technologies to stay ahead.

You wouldn’t launch an untested rocket

In many ways, this moment reminds me of my work at NASA. As an aerospace engineer, I never “tested in production.” Every shuttle mission was a carefully planned operation.

Deploying AI without understanding and acknowledging the risks is like launching an untested rocket: the damage can be immediate and catastrophic. Just as a failed space mission can erode public trust in NASA, deploying artificial intelligence without fully understanding the risks or implementing guardrails can erode consumers’ trust in an organization.

What we need now is a new trust operating system. To operationalize trust, leaders must create a program that is:

  1. Transparent. In aerospace engineering, comprehensive documentation is not bureaucracy; it is a force for accountability. The same applies to AI and trust. There must be traceability – from policy to control to evidence to attestation.
  2. Continuous. Just as NASA monitors its missions around the clock, companies must invest in trust as an ongoing process rather than a point-in-time checkbox. Controls, for example, need constant monitoring so that audit readiness becomes a state of being, not a last-minute sprint.
  3. Autonomous. Today’s rocket engines manage their own operations through integrated computers, sensors, and control loops – no pilot or ground crew adjusts valves directly mid-flight. As AI becomes a more prevalent part of everyday business, the same should apply to our trust programs. If humans, agents, and automated workflows are to transact, they must be able to validate trust on their own, deterministically and without ambiguity.

When I think back to my days in the space program, what stands out is not only the complexity of the missions but their interconnectedness. Tens of thousands of components, built by different teams, had to work together perfectly. Each team trusted that the others were doing their work effectively, and decisions were documented to ensure transparency across the organization. In other words, trust was the layer that held the entire Space Shuttle program together.

The same is true for AI today, especially as we enter this emerging era of agentic AI. We are shifting to a new way of working, where hundreds – and one day thousands – of agents, humans, and systems constantly interact with one another, generating tens of thousands of touchpoints. The tools are powerful and the opportunities are enormous, but only if we can earn and maintain trust in every interaction. Companies that build a culture of transparent, continuous, and autonomous trust will drive the next wave of innovation.

The future of artificial intelligence is already being built. The question is simple: will you build it on trust?

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

