AI is already showing up in foundation work, sometimes formally and sometimes quietly through everyday tools. The real risk is not adoption; it is ungoverned adoption. AI only performs as well as the data and controls behind it. When those foundations are strong, AI can elevate analysis, improve decision consistency and free teams to focus more energy on advancing the mission.
In this webinar, we shared a practical framework for governing data first, adding lightweight AI guardrails and reinforcing both with a focused set of cybersecurity fundamentals. We concluded with simple prompt techniques that help teams produce clearer, more repeatable results.
Watch the webinar replay here.
Here are some key takeaways:
- Governance, not adoption, is the real AI risk: AI is already embedded in foundation workflows through everyday tools. Without clear governance, policies and oversight, organizations risk inconsistent decision-making, data exposure and undocumented AI influence on grant outcomes.
- Data quality drives AI reliability: AI operates downstream of your data. Fragmented systems, inconsistent taxonomies and poor data hygiene lead to unreliable insights, false comparability and overconfident outputs, making data governance essential for effective AI use.
- AI delivers the most value before final decisions: The strongest use cases sit in research, analysis and decision preparation, including Letter of Inquiry (LOI) triage, mission alignment and memo drafting. Final funding decisions, compliance judgments and board-level conclusions should remain human-led.
- Risk-based guardrails enable responsible AI use: Not all use cases require the same level of oversight. Administrative tasks can be accelerated with standard review, while judgment-adjacent work requires structured validation, clear data sources and documented human accountability.
- Structured prompting improves consistency and trust: Clear, well-defined prompts that limit inputs, surface uncertainty and separate facts from assumptions produce more reliable, reviewable outputs. This reduces hallucinations and prevents AI from implying judgment where it should not.
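As a hypothetical illustration (not drawn from the webinar itself), a structured prompt for LOI triage might read: "Using only the attached LOI and our published funding criteria, summarize the request, note where it does or does not align with each criterion, flag missing or uncertain information, and clearly separate facts stated in the LOI from your assumptions. Do not recommend a funding decision."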
Featured Speaker:
Thomas J. DeMayo, CISSP, CISA, CIPP/US, CRISC, CEH, CHFI, CCFE
Partner, Cybersecurity and Privacy Advisory
Hosts:
Scott Brown, CPA, Partner
Michael Koenecke, CPA, Partner

