Greetings
I am a final-year PhD student at the University of Illinois Urbana-Champaign (UIUC), where I am advised by the eminent Nan Jiang. My current research interests lie in deriving principled methods for reinforcement learning. In particular: when and how can we use function approximation to derive sound, efficient, and useful decision-making algorithms?
During my PhD, I’ve had the immense delight of interning with Dylan Foster and Akshay Krishnamurthy at Microsoft Research, Dean P. Foster at Amazon Research, and Csaba Szepesvári at the University of Alberta.
In a previous life, I completed an MSc in Computer Science and a BSc in Maths & Physics, both from McGill University. My MSc research was advised by the dynamic duo of Prakash Panangaden and Marc G. Bellemare.
Here are links to my Google Scholar and my CV. I can be reached at philipa4 at illinois dot edu (or, in the event of overeager spam filters, at amortilaphilip at gmail dot com).
Education
2019 — 2025. PhD in Computer Science @ University of Illinois Urbana-Champaign.
Advised by Nan Jiang.
Thesis: Structure and Representation in Statistical Reinforcement Learning. [proposal]
Committee: Nan Jiang, Csaba Szepesvári, Maxim Raginsky, Arindam Banerjee.
2017 — 2019. MSc in Computer Science @ McGill University.
2013 — 2017. BSc in Honours Maths & Physics @ McGill University.
Distinctions: First Class Honours, Principal’s Student-Athlete Honour Roll.
Publications
Conference Papers
Reinforcement Learning under Latent Dynamics: Toward Statistical and Algorithmic Modularity
Philip Amortila, Dylan J. Foster, Nan Jiang, Akshay Krishnamurthy, Zakaria Mhammedi
Mitigating Covariate Shift in Misspecified Regression with Applications to Reinforcement Learning
Philip Amortila, Tongyi Cao, Akshay Krishnamurthy
COLT 2024 [arXiv]
Scalable Online Exploration via Coverability
Philip Amortila, Dylan J. Foster, Akshay Krishnamurthy
Harnessing Density Ratios for Online Reinforcement Learning
Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie
ICLR 2024 Spotlight [arXiv]
The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
Philip Amortila, Nan Jiang, Csaba Szepesvári
ICML 2023 [arXiv]
A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation
Philip Amortila, Nan Jiang, Dhruv Madeka, Dean P. Foster
On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function
Gellert Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári
Exponential Lower Bounds for Planning in MDPs With Linearly-Realizable Optimal Action-Value Functions
Gellert Weisz, Philip Amortila, Csaba Szepesvári
Solving Constrained Markov Decision Processes via Backward Value Functions
Harsh Satija, Philip Amortila, Joelle Pineau
A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms
Philip Amortila, Doina Precup, Prakash Panangaden, Marc G. Bellemare
AISTATS 2020 [arXiv, talk] & NeurIPS 2019 Optimization in RL Workshop Spotlight [talk]
Learning Graph Weighted Models on Pictures
Philip Amortila, Guillaume Rabusseau
ICGI 2018 [arXiv]
Workshop Papers
Temporally Extended Metrics for Markov Decision Processes
Philip Amortila, Marc G. Bellemare, Prakash Panangaden, Doina Precup
AAAI 2019 Safety in AI Workshop Spotlight [pdf]
Technical Notes
A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting
Philip Amortila, Nan Jiang, Tengyang Xie [arXiv]
Experience
Summer 2023. Research Intern @ Microsoft Research (New England).
Advised by Dylan Foster and Akshay Krishnamurthy.
Topic: Representation Learning & Modular Approaches to Rich Observation RL.
Summer 2022. Research Intern @ Amazon Research (New York).
Advised by Dean P. Foster.
Topic: Coordination & Communication in Partially Observed Cooperative Games.
Fall 2021. Research Intern @ Amazon Research (New York).
Advised by Dean P. Foster.
Topic: Optimal Algorithms for Expert-Assisted RL With Linear Features.
Summer 2021. Visiting Researcher @ University of Alberta.
Advised by Csaba Szepesvári.
Topic: Optimal Methods for Off-policy Evaluation With Misspecification.
Summer 2020. Visiting Researcher @ University of Alberta.
Advised by Csaba Szepesvári.
Topic: Limits of Sample-Efficient Learning With Linear Features.
Awards
2022 & 2023. Finalist for Apple PhD Fellowship (2022) and Google PhD Fellowship (2023).
Nominated by UIUC for the national competitions (3 nominees selected from among all UIUC students).
2021. Best Student Paper Award at ALT 2021.
2019. NSERC Postgraduate Scholarship – Doctoral (PGS-D).
Service
Reviewer
Journal of Machine Learning Research (JMLR). 2022, 2023
Transactions of Machine Learning Research (TMLR). 2022, 2023
International Conference on Machine Learning (ICML). 2020, 2021, 2022, 2023
Neural Information Processing Systems (NeurIPS). 2020, 2021, 2022
International Conference on Learning Representations (ICLR). 2023
Teaching Assistant
Fall 2023. CS 542 Statistical Reinforcement Learning @ UIUC.
Spring 2023. CS 443 Reinforcement Learning @ UIUC.
Fall 2019. CS 498 Reinforcement Learning @ UIUC.
Fall 2018. CS 598 Foundations of Machine Learning @ McGill.
Winter 2018. CS 551 Applied Machine Learning @ McGill.
Fall 2017. CS 551 Applied Machine Learning @ McGill.
Winter 2017. CS 302 Functional Programming @ McGill.