RLC WORKSHOP
Workshop on Programmatic Reinforcement Learning
Aug 5, 2025
Edmonton, Canada

This workshop explores using programmatic representations (e.g., code, symbolic programs, rules) to enhance agent learning and address key challenges in reinforcement learning (RL). By leveraging structured representations, we aim to improve interpretability, generalization, efficiency, and safety in deep RL, moving beyond the limitations of “black box” deep learning models. The workshop brings together researchers in RL and program synthesis/code generation to discuss using programs as policies (e.g., LEAPS, Code as Policies, HPRL, RoboTool, Carvalho et al. 2024), reward functions (e.g., Eureka, Language2Reward, Text2Reward), skill libraries (e.g., Voyager), task generators (e.g., GenSim), or environment models (e.g., WorldCoder, Code World Models). By bridging RL and programmatic representations, this paradigm enables human-understandable reasoning, reduces reliance on massive data-driven models, and promotes modularity, fostering progress toward verifiable and robust agents in both virtual and real-world applications.

Tentative Schedule

Time            Event
9:00--9:10      Opening remarks
9:10--9:40      Invited talk 1
9:40--10:10     Invited talk 2
10:10--10:45    Oral talks
10:45--11:00    Coffee break
11:00--11:30    Invited talk 3
11:30--12:00    Invited talk 4
12:00--13:00    Lunch
13:00--14:00    Poster session 1
14:00--14:30    Invited talk 5
14:30--15:00    Invited talk 6
15:00--16:00    Poster session 2
16:00--16:15    Coffee break
16:15--17:00    Panel discussion

All times are in local Mountain Time (MT).

Speakers

Organizers

Call For Papers


We invite the submission of research papers and position papers on the topic of programmatic representations for reinforcement learning and sequential decision-making. This workshop aims to bring together researchers from reinforcement learning, imitation learning, planning, search, and optimal control with experts in program synthesis and code generation to explore the use of programmatic structures to enhance agent learning.

Topics of Interest

Topics of interest include, but are not limited to:

  • Programs as Policies: Representing decision-making logic as programmatic policies, written in Python or domain-specific languages, to improve generalization and interpretability (a toy sketch follows this list).
  • Programs as Reward Functions: Synthesizing programs that encode reward functions, reducing the effort of hand-crafting rewards for agent learning.
  • Programs as Skill Libraries: Representing acquired skills as programs so that skills can be reused and composed.
  • Programmatically Generating Tasks: Synthesizing programs that describe diverse tasks for agent and robot learning, improving the generalization of learned policies.
  • Programs as Environment Models: Inferring executable code that simulates environment dynamics and using it to generate imagined data or to plan, as in model-based RL.
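
As a concrete (and deliberately toy) illustration of the first two themes, the sketch below shows a gridworld policy and reward function written as plain Python. Every name here (State, programmatic_policy, programmatic_reward, step) and the gridworld itself are illustrative assumptions for this page, not code from any of the systems cited above.

# Minimal sketch: a policy and a reward function expressed as short,
# human-readable programs on a toy gridworld. All names are illustrative.

from dataclasses import dataclass


@dataclass
class State:
    """Toy gridworld observation: agent and goal positions on a grid."""
    agent: tuple[int, int]
    goal: tuple[int, int]


def programmatic_policy(state: State) -> str:
    """A policy written as a program: greedily reduce the Manhattan
    distance to the goal."""
    (ax, ay), (gx, gy) = state.agent, state.goal
    if ax < gx:
        return "right"
    if ax > gx:
        return "left"
    if ay < gy:
        return "up"
    if ay > gy:
        return "down"
    return "stay"


def programmatic_reward(state: State, action: str, next_state: State) -> float:
    """A reward function written as code rather than hand-tuned scalars:
    +1 for reaching the goal, plus a small shaping bonus for getting closer."""
    def dist(s: State) -> int:
        return abs(s.agent[0] - s.goal[0]) + abs(s.agent[1] - s.goal[1])

    if next_state.agent == next_state.goal:
        return 1.0
    return 0.1 * (dist(state) - dist(next_state))


def step(state: State, action: str) -> State:
    """Deterministic toy transition, included only to make the sketch runnable."""
    moves = {"right": (1, 0), "left": (-1, 0), "up": (0, 1), "down": (0, -1), "stay": (0, 0)}
    dx, dy = moves[action]
    return State(agent=(state.agent[0] + dx, state.agent[1] + dy), goal=state.goal)


if __name__ == "__main__":
    s = State(agent=(0, 0), goal=(2, 1))
    ret = 0.0
    for _ in range(5):
        a = programmatic_policy(s)
        s_next = step(s, a)
        ret += programmatic_reward(s, a, s_next)
        s = s_next
    print(f"final state: {s.agent}, return: {ret:.2f}")

Because the policy and the reward are ordinary functions, they can be read, unit-tested, and edited by hand, which is the kind of interpretability and modularity the workshop topics above are concerned with.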

Submission Format: We accept submissions up to 8 pages in RLC format.

Important Dates

  • Submission Deadline: May 30, 2025, AoE
  • Author Notification: June 15, 2025, AoE
  • Camera-Ready Deadline: July 25, 2025, AoE
  • Workshop Date: August 5, 2025

Submission Process

All submissions will be managed through OpenReview with a double-blind review process. Accepted papers will be presented during poster sessions, with exceptional submissions selected for oral presentations.

Please submit your paper through the workshop's OpenReview page.

This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is based on the Leela Interp project. That means you're free to borrow the source code of this website with attribution.