Andrew Luxem
Andrew's Playbooks

Applied AI, Without the Hype: Notes from the Applied AI for Entrepreneurs Summit

#ai #applied-ai #leadership #decision-making #systems-thinking #human-in-the-loop #responsible-ai #entrepreneurship
Applied AI for Entrepreneurs Summit at the University of Utah bringing together founders, operators, and researchers


Yesterday I spent the day at the Applied AI for Entrepreneurs Summit at the University of Utah.

It was one of the most grounded, useful AI events I’ve attended in a long time.

Not because of flashy demos.
Not because of bold predictions.

But because nearly every speaker came back to the same uncomfortable truth:

AI doesn’t fail because the models aren’t powerful enough.
It fails because we design bad systems around them.

The Framing That Changed the Day

The summit opened with a simple but critical distinction:

Knowledge is now abundant.
Intelligence is the ability to reason, decide, and act under constraints.

That distinction immediately reframed the conversation.

The question isn’t:

What can AI do?

It’s:

Where should AI sit inside a system—and who is accountable for the outcome?

Once you ask that, a lot of hype collapses on contact.

Human-in-the-Loop Is Not a Compromise

Across keynotes, panels, and hallway conversations, there was broad alignment on one point:

Keeping humans in the loop is not a weakness.
It’s the advantage.

Human judgment still matters most when:

  • Risk is high
  • Trust is fragile
  • Outcomes are irreversible
  • Ethics and governance are involved

Bad AI practices don’t just fail quietly.
They create business debt, legal exposure, and reputational damage that compounds over time.

The teams doing this well weren’t chasing autonomy.
They were designing systems where AI supports decisions, not replaces them.

Applied AI Starts With Friction, Not Tools

One of the most repeated ideas of the day was refreshingly simple:

“We didn’t look for AI. We looked for friction.”

The strongest examples shared all followed the same pattern:

  • Start with one real problem
  • Map the workflow first (boxes and arrows)
  • Insert AI where it reduces friction
  • Keep humans where judgment matters
  • Validate quickly with real users

AI compressed timelines from months to weeks, sometimes days, but it didn't eliminate the need for clarity.

If you don’t know what problem you’re solving or who you’re solving it for, AI just helps you get lost faster.

The Scalability Cliff Most Teams Hit

Several speakers shared a hard-earned reality:

AI works at 10 users.
Struggles at 600.
Breaks at 40,000.

The failure point isn’t the model.
It’s the system.

Teams that successfully crossed that gap focused on:

  • Clear ownership between humans and machines
  • Structured outputs (often using JSON as a shared language)
  • Modular prompting instead of giant monolith prompts
  • Multi-model verification instead of blind trust
  • Production rigor: docs, QA, validation, and monitoring
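The structured-output point in that list is easy to make concrete. Here is a minimal sketch of the pattern, using Python's standard library; the field names, the 0.8 confidence threshold, and the routing labels are my own illustrative choices, not anything a speaker prescribed:

```python
import json

# Hypothetical schema: the fields a downstream system expects from the model.
REQUIRED_FIELDS = {"summary": str, "confidence": float, "needs_human_review": bool}


def parse_model_output(raw: str) -> dict:
    """Parse a model's JSON reply and enforce a minimal schema.

    Raises ValueError instead of silently trusting malformed output,
    so the calling system can retry, log, or escalate to a person.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model did not return valid JSON: {exc}") from exc
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"field {field!r} has the wrong type")
    return data


def route(raw: str) -> str:
    """Decide whether an output can flow automatically or needs a human."""
    try:
        data = parse_model_output(raw)
    except ValueError:
        return "human"  # malformed output never flows downstream
    if data["needs_human_review"] or data["confidence"] < 0.8:
        return "human"  # low confidence stays with a person
    return "auto"
```

The point of the sketch is the shape, not the specifics: JSON gives humans and machines a shared contract, validation catches failures at the boundary, and the default path on any doubt is a human, which is exactly the human-in-the-loop stance described above.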

The breakthrough wasn’t magic.

It was engineering discipline.

Messaging, Decisioning, and Commercial Reality

Sessions focused on messaging and customer decisioning grounded AI in commercial outcomes.

A few truths surfaced quickly:

  • Word choice and frequency matter more than volume
  • The best salespeople ask the most questions
  • Quality beats quantity, every time
  • AI works best as a decision-support layer for humans who still own the relationship

This wasn’t about replacing creativity.
It was about reducing guesswork before money gets spent.

Responsible AI Is Becoming Operational

One of the most encouraging parts of the summit was seeing responsible AI move from theory into practice.

Utah’s Responsible AI Community Consortium released the AI Leadership Blueprint, a practical, step-by-step guide designed to help organizations adopt and manage AI responsibly—before it becomes a liability.

👉 Read the AI Leadership Blueprint here:
https://rai.utah.edu/ai-blueprint/

What makes the blueprint stand out is that it’s not theoretical. It’s explicitly designed to meet organizations where they are.

Key highlights:

  • A practical roadmap for integrating generative AI responsibly
  • Clear guidance on governance, risk management, and policy development
  • Workforce training strategies that emphasize judgment, not just tools
  • ROI and sustainability frameworks to ensure AI creates long-term value

As Penny Atkins, Ph.D., noted during the summit, the blueprint is the result of deep, cross-sector work spanning workforce development, infrastructure, and policy, turning responsible AI into a shared, actionable framework rather than an abstract ideal.

The blueprint addresses the reality many leaders are facing right now:

  • AI adoption is happening faster than governance
  • Employees are already using AI, often without authorization
  • Organizations need structure, clarity, and accountability—not more experimentation

The scope of the blueprint spans the full AI lifecycle:

  • Foundation & Governance: leadership, policy, and risk
  • Readiness & Assessment: people, tools, and technology
  • Implementation & Training: change management and enablement
  • Value & Sustainability: ROI measurement and long-term planning

This work reflects a broader shift: responsible AI is no longer optional, and it’s no longer academic. It’s operational.

The Through-Line

Across every talk, panel, and side conversation, one idea kept resurfacing:

AI is a capable servant and a terrible master.

Applied AI isn’t about replacing people.
It’s about upgrading how decisions get made.

That requires:

  • Clear ownership
  • Human judgment
  • Measurable value
  • Systems designed for trust, not demos

Want to Attend Similar Events?

If you’re based in Salt Lake City, Utah (or nearby) and want to attend future events like this, I highly recommend keeping an eye on the Lassonde Entrepreneur Institute’s event calendar.

They consistently host thoughtful, practitioner-friendly sessions that bring together founders, operators, researchers, and students around real-world problems, not hype.

👉 View upcoming events here:
https://lassonde.utah.edu/calendar/

Huge thanks to the organizers, facilitators, and speakers for creating a space where these conversations can happen.

This is the kind of work, and the kind of community, that actually moves applied AI forward.

