Broad Research Interests: Automated Reasoning, Artificial Intelligence, Formal Methods
- Explainability through verifiable chains of inference
- Defeasible reasoning under uncertainty
- Reasoning about agents and their cognitive states
Integrated Planning and Reinforcement Learning
Working with Junkyu Lee, Michael Katz, and Shirin Sohrabi on extending their Planning Annotated Reinforcement Learning framework, developed at IBM Research, and on relaxing some of its assumptions.
In this framework, automated planning operates on a higher-level abstraction of the overall problem, with a surjective function mapping RL states to planning states. The agent is based on the options framework from Hierarchical Reinforcement Learning, where options are defined by the grounded actions in the planning model.
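The two ingredients above, a surjective state abstraction and options induced by grounded planning actions, can be sketched as follows. This is an illustrative toy (a grid world whose rooms form the planning states), not the framework's actual API; every name here is an assumption.

```python
# Hedged sketch: a surjective abstraction from low-level RL states to
# high-level planning states, with options built from grounded planning
# actions. All names are illustrative, not the framework's real interface.

def abstraction(rl_state):
    """Surjective map: many RL states collapse to one planning state.
    Here, an (x, y) grid position maps to the 4x4 room containing it."""
    x, y = rl_state
    return ("room", x // 4, y // 4)

class PlanningOption:
    """An option induced by a grounded planning action src -> dst."""
    def __init__(self, src, dst, policy):
        self.src, self.dst, self.policy = src, dst, policy

    def can_initiate(self, rl_state):
        # The option is available exactly in the abstract source state.
        return abstraction(rl_state) == self.src

    def terminates(self, rl_state):
        # The option ends once the abstract destination is reached.
        return abstraction(rl_state) == self.dst
```

Executing a high-level plan then amounts to running each option's low-level policy until its termination condition fires.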
More to come…
Symbolic Methods for Cryptography
Working with Dr. Andrew Marshall and others on applying term reasoning within computational logic to cryptography. This collaboration was previously funded under an ONR grant. We are interested in applying techniques such as unification and term rewriting to the following areas:
- Block Ciphers
- Secure Multi-party Computation
- Commitment Schemes
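To give a flavor of the techniques mentioned above, here is a minimal sketch of first-order syntactic unification (Robinson-style with an occurs check). It is illustrative only, not CryptoSolve's actual code; terms are encoded as strings beginning with `?` for variables and tuples `(fname, arg1, ..., argn)` for function applications, a representation chosen just for this example.

```python
# Hedged sketch of syntactic unification; not CryptoSolve's implementation.
# A term is a variable ("?x") or a tuple (fname, arg1, ..., argn).

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    # Chase variable bindings until a non-bound term is reached.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier as a dict, or None if none exists."""
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying `f(?x, b)` with `f(a, ?y)` yields the substitution `{?x -> a, ?y -> b}`, while `?x` against `f(?x)` fails the occurs check.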
Together we built CryptoSolve, a symbolic cryptographic analysis tool, and made it publicly available on GitHub. I wrote the term algebra and rewrite libraries, and contributed to the mode of operation library and some unification algorithms. I still help maintain the codebase, as well as contribute to our current work on Garbled Circuits. We previously presented our work at UNIF 2020 (slides), FROCOS 2021 (slides), and WRLA 2022 (slides).
I’ve written a few notes about term reasoning.
- NRL: Catherine Meadows
- UMW: Andrew Marshall
- UT Dallas: Serdar Erbatur
- SUNY Albany: Paliath Narendran, Kim Cornell
- Clarkson University: Christopher Lynch
Group Website: https://cryptosolvers.github.io
Deep Reinforcement Learning: With Dr. Ron Zacharski I focused on making deep reinforcement learning algorithms more sample efficient; that is, on getting the RL agent to learn more from every observation so that it reaches its goal faster. With that goal in mind, I built a Reinforcement Learning library in PyTorch to help benchmark my ideas.
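One standard route to the sample efficiency described above is experience replay: storing transitions and reusing each one across many gradient updates. The sketch below is a generic illustration of that idea, not the API of the PyTorch library mentioned.

```python
# Hedged illustration of experience replay, a common sample-efficiency
# technique; a generic sketch, not the library's actual interface.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # Oldest transitions are evicted once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Each stored transition can be drawn many times, so the agent
        # extracts more learning signal per environment step.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

A training loop would interleave `push` on every environment step with periodic `sample` calls to form minibatches for the learner.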
- RL Library on GitHub
- Interactive Demonstrations Library
- Undergraduate Honors Thesis (Eagle Scholar Entry)
- Undergraduate Honors Defense
- QEP Algorithm Slides
- More…
Reinforcement Learning: Studied the fundamentals of reinforcement learning with Dr. Stephen Davies: value functions, policy functions, how an environment can be described as a Markov decision process, and more.
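Those fundamentals fit in a few lines of code. Below is a minimal value-iteration sketch on an assumed two-state toy MDP (the transition table is invented for illustration): repeated Bellman optimality backups converge to the optimal value function.

```python
# Toy MDP for illustration: state 0 can "go" to state 1 for reward 1;
# everything else gives reward 0. P[s][a] lists (prob, next_state, reward).
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(100):  # Bellman optimality backups to (near) convergence
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )
        for s in P
    }
```

Here the optimal policy cycles between the two states, so the values satisfy V(0) = 1 + γ·V(1) and V(1) = γ·V(0), giving V(0) = 1/(1 − γ²) ≈ 5.26.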
Before this study, I worked through a great book called “Build your own Lisp”.
Competitive Programming: Studied the algorithms and data structures needed for competitive programming. Attended the ACM ICPC in November 2018 and 2019 with a team of two other students.
Cluster Analysis: The study of grouping similar observations without any prior knowledge. I studied this topic through deep dives into Wikipedia articles under the guidance of Dr. Melody Denhere during Spring 2018. Extensive notes
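As a concrete instance of grouping similar observations without labels, here is a small sketch of k-means, one of the standard algorithms covered in that study (illustrative only; the notes survey many more methods).

```python
# Hedged sketch of k-means clustering on 2D points (tuples).
import random

def kmeans(points, k, iters=50, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        # (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # Update step: move each center to its cluster's mean.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return centers, clusters
```

On two well-separated blobs of points, the centers settle onto the blob means after a few iterations.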
Excitation of Rb87: Worked in a quantum research lab alongside fellow student Hannah Killian under the guidance of Dr. Hai Nguyen. I provided software tools and helped work through the mathematics behind the phenomena.
Beowulf Cluster: To get around the long runtimes of my simulation code, I applied for and received funding to build a Beowulf cluster for the Physics department. Dr. Maia Magrakvilidze advised the project. LUNA-C Poster