The Role of Quality Data in Driving Reliable AI Outcomes
AFCEA TechNet Cyber 2025 – Baltimore Convention Center – May 6th, 2025 – 4–5 PM EDT
Panel Description: As organizations increasingly leverage artificial intelligence to transform industries, the quality and security of the underlying data have become critical. This panel will explore the dual challenges of ensuring data integrity and protecting sensitive information, while emphasizing how robust data governance directly impacts the reliability and ethical use of AI. Experts will discuss strategies for safeguarding data pipelines, mitigating risks of biased or incomplete data, and navigating regulatory landscapes in a data-driven AI ecosystem.
Key Discussion Points:
The Nexus of Data Security and Quality: How secure and high-quality data underpins effective AI applications and reduces risks of inaccuracies or harmful outcomes.
Data Integrity in AI Development: Ensuring datasets are complete, unbiased, and protected from tampering throughout the AI lifecycle.
Emerging Threats to Data Security in AI: Exploring vulnerabilities in AI data pipelines, including adversarial attacks, data poisoning, and theft.
Best Practices for Data Governance in AI Projects: Real-world examples of how organizations manage data quality, security, and privacy to enable successful AI initiatives.
Future-Proofing AI with Secure and Ethical Data Practices: Innovations and tools for improving data security while scaling AI applications responsibly.
Panelists:

Shivaji Sengupta – CEO – NXTKey Corporation

Darek J. Kitlinski – Air Force A1 CTO – United States Air Force

CAPT Daniel Rogers, USCG – Deputy Chief Data and Artificial Intelligence Officer – U.S. Coast Guard

David W. Carroll – Vice President Cyber Capability, Engineering and Strategy – GDIT

Bob Scharmann – Vice President, Cyber Accelerator – Leidos

Questions addressed by panelists include:
- The Nexus of Data Security and Quality
How do data provenance and traceability contribute to both security and quality assurance in AI pipelines?
In federated or distributed learning environments, how can organizations ensure consistent data quality and enforce security policies across nodes?
How do you prioritize data security versus data quality when developing AI systems, and are there trade-offs you’ve encountered in real-world projects?
Can you share an example from your organization where high-quality, secure data directly led to a successful AI outcome—or where poor data caused a failure?
From a government perspective, how does the public sector ensure that data security protocols align with the need for high-quality data in AI applications?
- Data Integrity in AI Development
What specific steps does your organization take to ensure datasets are unbiased and complete during the AI development lifecycle?
How do you detect and mitigate data tampering or degradation in large-scale AI projects, especially when datasets come from multiple sources?
For contractors working with government agencies, what unique challenges arise when ensuring data integrity across public-private partnerships?
What are the best practices for labeling data securely and accurately, especially in sensitive domains like healthcare or defense?
How do you build AI systems that are resilient to corrupted or manipulated training data?
How do compliance standards (like NIST SP 800-53 or ISO/IEC 27001) influence your approach to maintaining data integrity?
- Emerging Threats to Data Security in AI
What emerging threats—like adversarial attacks or data poisoning—are you seeing in AI data pipelines, and how are you addressing them?
How significant is the risk of data theft in AI systems, and what measures are proving most effective in preventing it?
From a regulatory standpoint, how is the government preparing to counter these threats, and what role do contractors play in supporting those efforts?
What insider threats or internal process failures have you seen compromise AI data security?
What role does synthetic data play in mitigating data poisoning risks, and does it introduce any new vulnerabilities?
Are current cybersecurity frameworks adequately addressing AI-specific data threats, or are new models needed?
- Best Practices for Data Governance in AI Projects
Can you share a specific example of a data governance framework that has worked well in an AI project, and what made it successful?
How do you balance privacy requirements, such as those in federal regulations, with the need for accessible, high-quality data in AI development?
What lessons have you learned from past AI projects about integrating security and quality into data governance from the outset?
How do you operationalize data governance in cross-functional AI teams involving data scientists, IT security, and legal compliance?
How do you manage and update consent for AI data usage in dynamic, real-time data environments?
- Future-Proofing AI with Secure and Ethical Data Practices
What emerging tools or innovations are you most excited about for improving data security and quality in AI systems over the next decade?
How can organizations scale AI responsibly while maintaining ethical data practices, especially under pressure to deliver results quickly?
What role do you see for public-private collaboration in developing standards for secure and ethical data practices in AI?
How can organizations measure the success of secure and ethical data practices in real-world AI deployments?
What role should explainability (XAI) play in validating the integrity of AI outputs based on input data quality?
How can organizations prepare for quantum-era threats to data security in AI systems?