AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC)

Nov 7, 2024
Brian Jalaian, Ph.D.
Date: Nov 7, 2024 9:00 AM — Nov 9, 2024 5:00 PM
Location:

Westin Arlington Gateway

801 N Glebe Rd, Arlington, VA 22203

About The ATRACC Symposium Session

Artificial intelligence (AI) has become a transformative technology with revolutionary impact across various domains, including challenging contexts such as civil infrastructure, healthcare, and military defense. The ATRACC Symposium addresses the critical need for assessing AI trustworthiness and risk in these challenged contexts.

As the founder of this event, I have built an international community around this subject. I hope we can continue this effort annually and establish lasting international collaboration on this emerging topic.

Key Topics

  • Assessment of non-functional requirements (explainability, transparency, accountability, privacy)
  • Methods for system reliability, uncertainty quantification, and balancing generalization against over-generalization
  • Verification and validation (V&V) of AI systems
  • Enhancing reasoning in Large Language and Foundation Models (LLFMs)
  • Links between performance, trustworthiness, and trust
  • Architectures for Mixture-of-Experts (MoE) and multi-agent systems
  • Evaluation of AI system vulnerabilities, risks, and impacts

Important Dates

  • Symposium Dates: November 7-9, 2024
  • Registration Fee Increase: October 4, 2024
  • Hotel Room Block Deadline: October 17, 2024

Registration

Early registration is recommended, as fees increase on October 4. Hotel rooms should be booked as soon as possible due to limited availability.

For more information on topics and submission guidelines, please visit our Call for Papers page.

Brian Jalaian, Ph.D.
Associate Professor
Dr. Brian Jalaian is an Associate Professor at the University of West Florida and a Research Scientist at IHMC, where he leads work at the intersection of machine learning, AI assurance, and systems optimization. His research spans large language models (LLMs), AI model compression for edge deployment, uncertainty quantification, agentic and neurosymbolic AI, and trustworthy AI in medicine and defense. Formerly a senior AI scientist at the U.S. Army Research Lab and the DoD's JAIC, Brian has shaped national efforts in robust, resilient, and testable AI. He is passionate about building intelligent systems that are not only powerful but provably reliable. When he is not optimizing AI at scale, he is mentoring the next generation of ML engineers or pushing the boundaries of agentic reasoning.