Brandon Rozek

Photo of Brandon Rozek

PhD Student @ RPI studying Automated Reasoning in AI and Linux Enthusiast.

Quick List of Publications

Broad Research Interests: Automated Reasoning, Artificial Intelligence, Formal Methods

Logic-Based AI

Working with Dr. Selmer Bringsjord and others in the RAIR Lab to design and implement artificially intelligent agents using computational logic. I’m particularly interested in:

Notes on Automated Theorem Proving

Integrated Planning and Reinforcement Learning

Working with Junkyu Lee, Michael Katz, and Shirin Sohrabi on extending and relaxing assumptions within their existing Planning Annotated Reinforcement Learning Framework developed at IBM Research.

In this framework, automated planning is used on a higher-level version of the overall problem, with a surjective function mapping RL states to AP states. The agent is based on the options framework from Hierarchical Reinforcement Learning, where options are defined as the grounded actions in the planning model.
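The idea above can be sketched concretely. This is a toy illustration only: the grid world, the `abstract` function, and the state names are invented here and are not part of the actual IBM framework.

```python
# Toy illustration: a surjective abstraction maps many low-level RL states
# (grid cells) onto a few high-level planning states (rooms). All names
# here are illustrative.

def abstract(rl_state):
    """Surjective map from an (x, y) grid cell to a planning-level room."""
    x, y = rl_state
    return "left_room" if x < 5 else "right_room"

# Every planning state has at least one RL preimage (surjectivity),
# so a plan over rooms covers the whole low-level state space.
rl_states = [(x, y) for x in range(10) for y in range(10)]
planning_states = {abstract(s) for s in rl_states}
print(sorted(planning_states))  # ['left_room', 'right_room']

# In the options framework, each grounded planning action becomes an
# option: a sub-policy the RL agent runs until the abstract state changes.
def option_terminates(start_rl_state, current_rl_state):
    return abstract(start_rl_state) != abstract(current_rl_state)
```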

More to come…

Symbolic Methods for Cryptography

Working with Dr. Andrew Marshall and others on applying term reasoning within computational logic to cryptography. This collaboration was previously funded under an ONR grant. We are interested in applying techniques such as unification and term rewriting to the following areas:

Together we built CryptoSolve, a symbolic cryptographic analysis tool, and made it publicly available on GitHub. I wrote the term algebra and rewrite libraries, and contributed to the mode of operation library and some unification algorithms. I still help maintain the codebase, as well as contribute to our current work on Garbled Circuits. We previously presented our work at UNIF 2020 (slides), FROCOS 2021 (slides), and WRLA 2022 (slides).

I’ve written a few notes about term reasoning.
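To give a flavor of the term reasoning mentioned above, here is a minimal sketch of syntactic (first-order) unification. The term encoding is invented for this example (tuples for function applications, lowercase strings for variables, capitalized strings for constants) and is not CryptoSolve's implementation; the occurs check is omitted for brevity.

```python
# Minimal syntactic unification sketch. Terms: ("f", t1, ..., tn) is a
# function application; lowercase strings are variables; capitalized
# strings are constants. Illustrative only.

def is_var(t):
    return isinstance(t, str) and t.islower()

def walk(t, subst):
    """Follow variable bindings until hitting a non-variable or unbound var."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(s, t, subst=None):
    """Return a most general unifier of s and t, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return {**subst, s: t}
    if is_var(t):
        return {**subst, t: s}
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and len(s) == len(t) and s[0] == t[0]):
        for a, b in zip(s[1:], t[1:]):  # unify arguments left to right
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # clash: different function symbols or arities

# unify(("f", "x", "B"), ("f", "A", "y"))  ->  {'x': 'A', 'y': 'B'}
```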

Current Collaborators:

Group Website: https://cryptosolvers.github.io

Reinforcement Learning

Deep Reinforcement Learning: With Dr. Ron Zacharski, I focused on making deep reinforcement learning algorithms more sample efficient: that is, how can an RL agent learn more from every observation so that it reaches its goal faster? With that goal in mind, I built a reinforcement learning library in PyTorch to help benchmark my ideas.


RL Library on GitHub
Interactive Demonstrations Library
Undergraduate Honors Thesis (Eagle Scholar Entry)
Undergraduate Honors Defense
QEP Algorithm Slides
More…

Reinforcement Learning: Studied the fundamentals of reinforcement learning with Dr. Stephen Davies, covering value functions, policy functions, describing environments as Markov decision processes, and more.

Notes and Other Goodies / Github Code
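The fundamentals above fit in a few lines of code. Below is a minimal value-iteration sketch for a two-state Markov decision process; the states, actions, rewards, and discount factor are all invented for illustration.

```python
# Value iteration on a tiny deterministic MDP.
# transitions[state][action] = (next_state, reward); all values invented.
transitions = {
    "s0": {"stay": ("s0", 0.0), "go": ("s1", 1.0)},
    "s1": {"stay": ("s1", 0.0), "go": ("s0", 0.0)},
}
gamma = 0.9  # discount factor

# Value function, initialized to zero, then repeatedly updated with the
# Bellman optimality operator until it (effectively) converges.
V = {s: 0.0 for s in transitions}
for _ in range(200):
    V = {
        s: max(r + gamma * V[ns] for (ns, r) in transitions[s].values())
        for s in transitions
    }

# The greedy policy reads the best action off the converged value function.
policy = {
    s: max(
        transitions[s],
        key=lambda a: transitions[s][a][1] + gamma * V[transitions[s][a][0]],
    )
    for s in transitions
}
```

Here both states prefer "go": the agent loops between the states to collect the reward on the s0 → s1 transition.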


Programming Languages: Back in the Fall of 2018, under the guidance of Ian Finlayson, I worked towards creating a programming language similar to SLOTH (Simple Language of Tiny Heft). SLOTH Code

Before this study, I worked through a great book called “Build your own Lisp”.

Competitive Programming: Studied the algorithms and data structures necessary for competitive programming. Attended the ACM ICPC in November of 2018 and 2019 with a team of two other students.

Cluster Analysis: The study of grouping similar observations without any prior knowledge of their labels. I studied this topic through a deep dive into Wikipedia articles under the guidance of Dr. Melody Denhere during Spring 2018. Extensive notes

Excitation of Rb87: Worked in a quantum research lab alongside fellow student Hannah Killian under the guidance of Dr. Hai Nguyen. I provided software tools and assisted in understanding the mathematics behind the phenomena.

Modeling Population Dynamics of Incoherent and Coherent Excitation

Coherent Control of Atomic Population Using the Genetic Algorithm

Beowulf Cluster: To get around my frustrations with long-running simulation code, I applied for and received funding to build a Beowulf cluster for the Physics department. Dr. Maia Magrakvilidze advised this project. LUNA-C Poster