An example preprint / working paper

Abstract
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.

This work is driven by the results in my previous paper on LLMs.

Create your slides in Markdown - click the Slides button to check out the example.

Add the publication’s full text or supplementary notes here. You can use rich formatting such as including code, math, and images.
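For instance, supplementary notes could pair the prose with a short, self-contained snippet or a rendered equation such as the softmax, \(\sigma(z)_i = e^{z_i} / \sum_j e^{z_j}\). The block below is a minimal sketch of that kind of inclusion; the function and the sample scores are purely illustrative and not drawn from any publication.

```python
import math

# Minimal illustrative sketch: a hypothetical helper one might embed in
# supplementary notes. The function and data are examples, not a published method.

def softmax(scores):
    """Convert a list of raw scores into a probability distribution."""
    shifted = [s - max(scores) for s in scores]  # subtract the max for numerical stability
    exps = [math.exp(s) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([2.0, 1.0, 0.1]))  # -> roughly [0.66, 0.24, 0.10]
```

A block like this renders with syntax highlighting, and the same notes section can mix it freely with math and images.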

Authors
Brian Jalaian, Ph.D.
Associate Professor
Dr. Brian Jalaian is an Associate Professor at the University of West Florida and a Research Scientist at IHMC, where he leads work at the intersection of machine learning, AI assurance, and systems optimization. His research spans large language models (LLMs), AI model compression for edge deployment, uncertainty quantification, agentic and neurosymbolic AI, and trustworthy AI in medicine and defense. Formerly a senior AI scientist at the U.S. Army Research Laboratory and the DoD's Joint Artificial Intelligence Center (JAIC), Brian has shaped national efforts in robust, resilient, and testable AI. He is passionate about building intelligent systems that are not only powerful but also provably reliable. When he's not optimizing AI at scale, he's mentoring the next generation of ML engineers or pushing the boundaries of agentic reasoning.