This workshop explores using programmatic representations (e.g., code, symbolic programs, rules) to enhance agent learning and address key challenges in reinforcement learning (RL). By leveraging structured representations, we aim to improve interpretability, generalization, efficiency, and safety in deep RL, moving beyond the limitations of “black box” deep learning models. The workshop brings together researchers in RL and program synthesis/code generation to discuss using programs as policies (e.g., LEAPS, Code as Policies, HPRL, RoboTool, Carvalho et al. 2024), reward functions (e.g., Eureka, Language2Reward, Text2Reward), skill libraries (e.g., Voyager), task generators (e.g., GenSim), or environment models (e.g., WorldCoder, Code World Models). This paradigm bridging RL and programmatic representations enables human-understandable reasoning, reduces reliance on massive data-driven models, and promotes modularity, fostering progress toward verifiable and robust agents across virtual and real-world applications.
| Time | Event |
|---|---|
| 9:00--9:10 | Opening remarks |
| 9:10--9:40 | Invited talk 1: Kevin Ellis |
| 9:40--10:10 | Invited talk 2: Martha White |
| 10:10--10:45 | Oral talks |
| 10:45--11:00 | Coffee break |
| 11:00--11:30 | Invited talk 3: Amy Zhang |
| 11:30--12:00 | Invited talk 4: Sheila McIlraith |
| 12:00--13:00 | Lunch |
| 13:00--14:00 | Poster session 1 |
| 14:00--14:30 | Invited talk 5: Yuandong Tian |
| 14:30--15:00 | Invited talk 6: Jacky Liang |
| 15:00--16:00 | Poster session 2 |
| 16:00--16:15 | Coffee break |
| 16:15--17:00 | Panel discussion |
All times are in Pacific Time (PT).
We invite the submission of research papers and position papers on the topic of programmatic representations for reinforcement learning and sequential decision-making. This workshop aims to bring together researchers from reinforcement learning, imitation learning, planning, search, and optimal control with experts in program synthesis and code generation to explore the use of programmatic structures to enhance agent learning.
Topics of Interest
Topics of interest include, but are not limited to:
Submission Format: We accept submissions of up to 9 pages in either the RLC (ICML-based) or NeurIPS format.
Important Dates
Submission Process
All submissions will be managed through OpenReview with a double-blind review process. Accepted papers will be presented during poster sessions, with exceptional submissions selected for oral presentations.
Please submit your paper to the OpenReview page.
Please incorporate the reviewers’ feedback and prepare your camera-ready submission, then submit the camera-ready version on OpenReview. The camera-ready submission should be de-anonymized and at most 9 pages (excluding references and appendices). The paper may use either the RLC or NeurIPS format, with the headnote/footnote “RLC 2025 Workshop on Programmatic Reinforcement Learning”.
Camera-Ready LaTeX Templates:
The camera-ready deadline is July 25, 2025, Anywhere on Earth (AoE).
Reza Abdollahzadeh, Parnian Behdin, Kiarash Aghakasiri, Levi Lelis
Amirhossein Rajabpour, Kiarash Aghakasiri, Sandra Zilles, Levi Lelis
Dillon Ze Chen, Johannes Zenn, Tristan Cinquin, Sheila A. McIlraith
Zihan Ye, Oleg Arenz, Kristian Kersting
Pierriccardo Olivieri, Fausto Lasca, Alessandro Gianola, Matteo Papini
Hector Kohler, Waris Radji, Quentin Delfosse, Riad Akrour, Philippe Preux