RPL Summer School 2024

Participants of the RPL Summer School 2024 at the Happy Tammsvik resort, Stockholm, Sweden.

The division for Robotics, Perception and Learning (RPL) at KTH Royal Institute of Technology is organising the second edition of the RPL Summer School for doctoral and postdoctoral researchers, which will take place on 9–14 June 2024 in Stockholm, Sweden.

The summer school hosts a mix of academic and industrial researchers from around the world and aims to provide a forum that promotes new collaborations. Our school focuses on a diverse range of topics within robotics, perception, and learning, encompassing areas such as Machine Learning, Computer Vision, Robotics, Human-Robot Interaction, Planning, and Decision Making.

The summer school revolves around brainstorming and discussion sessions among the participants, interleaved with keynote speeches and invited talks. The aim of this programme is for participants to discover overlaps in their research topics, find novel ideas, and establish new collaborations with other researchers over the course of the summer school. We hope that by the end of the event participants will have formed concrete leads for future work. A preliminary summary of the programme is as follows:

  • Six-day event, 9–14 June 2024
  • Keynote speeches and invited talks
  • Panel discussion
  • Student presentations and brainstorming sessions

The summer school will take place at the Happy Tammsvik resort, where participants can enjoy nature, a spa, outdoor activities, and more. RPL sponsors the expenses of the summer school participants, including accommodation, meals, and refreshments from dinner on 9 June to midday on 14 June, as well as all materials and activities during the event.

Speakers

Hadas Kress-Gazit

Cornell University

Jeannette Bohg

Stanford University

Marc Toussaint

TU Berlin

Maya Cakmak

University of Washington

Nick Pawlowski

Microsoft Research Cambridge

Sam Devlin

Microsoft Research Cambridge

Umang Bhatt

New York University

Viktor Larsson

Lund University

Ashley Edwards

Research Scientist

Participants

(ordered alphabetically)

  • Adam Jelley: Aligning Agents like Large Language Models
  • Ahmet Ercan Tekden: Data-Efficient Representation Learning for Grasping and Manipulation
  • Ajinkya Khoche: Multi-source fusion for long-range 3D perception
  • Alberta Longhini: Manipulation of Cloth-like Deformable Objects
  • Albin Larsson Forsberg: Constrained Multi-agent reinforcement learning in telecommunications
  • Alejandro Sánchez Roncero: On the exploration of anti-drone drone strategies for public safety
  • Ameya Pore: Safe Reinforcement Learning for Robot-assisted Surgery
  • Aniol Civit Bertran: Multi-objective Learning For Personalising Robot Interactions in Assistive Tasks
  • Anna Gautier: Multi-agent planning when goals differ
  • Bengisu Cagiltay: Supporting Long-Term HRI Through Shared Family Routines
  • Birthe Nesset: Building, Breaking and Repairing Transactional Trust in Human-Robot Interactions
  • Carlo Bosio: Automated Layout Design and Control of Robust Cooperative Grasped-Load Aerial Transportation Systems
  • Chelsea Rose Sidrane: Safe and Reliable Data-Driven Autonomy
  • Christyan Mario Cruz Ulloa: Quadrupedal Robots in Search and Rescue: Perception and Teleoperation
  • Ci Li: 3D Shape and Pose Modeling for Horse Motion Capture
  • Claire Yang: Towards Socially Intelligent Robots: Adapting to Diverse Needs
  • Cornelius Braun: Discovering diverse skills via black-box optimization
  • David Caceres Dominguez: Human Interpretable Robot Learning from Demonstration
  • Divya Thuremella: Long-Tailed Trajectory Prediction
  • Doganay Sirintuna: Adaptive Approaches for Collaborative Mobile Manipulation
  • Elena Merlo: Exploiting Information Theory for Intuitive Robot Programming of Manual Activities
  • Ermanno Bartoli: Continual Learning for Human-Robot Interaction
  • Fereidoon Zangeneh: Visual Localisation in Ambiguous Scenes
  • Finn Lukas Busch: Planning and Navigation for Autonomous Robots using Foundation Models
  • Gregory LeMasurier: Proactively Explaining Robot Failures
  • Haofei Lu: Generating Dexterous Grasps for Unknown Objects
  • Holly Dinkel: KnotDLO: Toward Interpretable Knot Tying
  • Idil Ozdamar: Collaborative Loco-manipulation through Pulling and Pushing Actions
  • Irene Ballester Campos: Measuring dementia behaviours through depth sensors
  • Jane Pauline Ramirez: Biomorphic Drones for Multimodal Locomotion
  • Jens Lundell: Constrained Generative Grasp Sampling
  • Jonathan Astermark: Homography constraints for pose estimation in computer vision
  • Joris Verhagen: Robust Motion Planning with Signal Temporal Logic
  • Josip Josifovski: Continual Reinforcement Learning for Sim2Real Transfer in Robotics
  • José Pedro: Visual SLAM & Video Compression
  • Julian Nubert: Robust Robot Perception through (Joint) Optimization and Learning
  • Kartik Venkata Ramachandruni: Unobtrusive Adaptation to Organizational Preferences For Personalized Robot Assistance
  • Kate Candon: Leveraging Implicit Human Feedback to Better Learn from Explicit Human Feedback in Human-Robot Interactions
  • Katharina Friedl: Physics-Aware Learning of Dynamical Systems
  • Klas Wijk: Differentiable Feature Selection
  • Lama Alssum: Continual Learning for Image and Video Understanding
  • Lennart Wachowiak: When and What to Explain in Human–Robot Collaborations?
  • Louise Rixon Fuchs: Perception for autonomous underwater vehicles using deep learning
  • Ludvig Ericson: Indoor Mobile Robotics and Generative AI: Learning to Predict Indoor Environments by Learning from Floor Plans
  • Maciej Wozniak: Multimodal perception methods for autonomous driving
  • Marcel Büsching: Neural scene representation and rendering for dynamic scenes
  • Marnix Suilen: Robust Planning and Learning Under Uncertainty
  • Matteo Gamba: Generalization in Deep Learning Through the Lens of Smooth Interpolation
  • Matthew Ashman: Transformer Neural Processes
  • Matti Vahs: Risk-aware Safety of Robotic Systems via Control in Belief Space
  • Mayank Mittal: Learning whole-body manipulation for legged mobile manipulators
  • Merle Mirjam Reimann: Spoken Capability Communication in Human-Robot Interaction
  • Michele Antonazzi: Development and Adaptation of Robotic Vision in the Real-World
  • Miguel Vasco: Representation Learning and Decision-Making under Change
  • Minja Axelsson: Designing Ethical Robots for Wellbeing
  • Nam Hee Kim: Tackling Movement Mysteries with Reference-Divergent Learning
  • Navdeep Kumar: Robust Reinforcement Learning (RL), and Tree search in RL
  • Nona Rajabi: Extracting Intention and Perception Information from Brain Activity
  • Oliver Öhrstam Lindström: Beyond 2040 – A new generation of fighter systems in the making
  • Parag Khanna: Adaptive and Personalized Human-Robot Handovers
  • Paula Carbó Cubero: Local feature matching for heterogeneous visual SLAM
  • Priya Sundaresan: Towards Convenient Goal Specification & Teleoperation
  • Qingwen Zhang: Dynamic Awareness in Point Clouds
  • Quantao Yang: Scaling Robot Skill Acquisition through Foundation Models
  • Rafael Ignacio Cabral Muchacho: PDE-Based Adaptive Distance Functions and Applications to Robotic Manipulation
  • Rebecca Stower: Don’t fail me NAO: Understanding failures in human-robot interaction
  • Rebecka Winqvist: Learning from Graded Failure Feedback
  • Riddhiman Laha: Unified Reactive Motion Generation: Integration of Translation & Rotation through Coordinate Invariant Planning on Riemannian Manifolds
  • Ruiyu Wang: Deep Learning for Robotic Manipulation
  • Sarah Gillet: Computational Approaches to Interaction-Shaping Robotics
  • Shaohang Han: Motion Planning with Interaction and Risk Awareness
  • Shutong Jin: Attention as a Tool for Zero-shot Robot Learning
  • Tobias Löw: Geometric Algebra for Robot Learning, Control and Optimization
  • Wenjie Yin: Developing Data-Driven Models for Understanding Human Motion
  • Wenxi Wu: Explaining and Securing Robot Motion Planners
  • William Ahlberg: Human-in-the-Loop Learning for Believable Non-Player Characters
  • Xiaomeng Zhu: Sim2Real: Object detection for manufacturing applications
  • Xixi Liu: Out-of-Distribution Detection for Reliable Deep Learning Models
  • Yi Yang: Motion Prediction in Autonomous Driving
  • Yiqing Xu: Imparting Human Objectives into Robots
  • Yongliang Wang: Self-supervised Target-Driven Object Manipulation in Constrained Clutter
  • Youssef Mohamed: Context Aware Affect Detection Using Thermal And Optical Sensors
  • Yuchong Zhang: Interactive AI for Human Perception in Robot Interaction: From Physicality to Virtuality
  • Yufei Duan: Long-horizon robotic manipulation

Organising committee

Anna Gautier

Postdoc

Fereidoon Zangeneh

Doctoral student

Grace Hung

Administrative coordinator

Jens Lundell

Postdoc

Maciej Wozniak

Doctoral student

Miguel Vasco

Postdoc

Olov Andersson

Assistant professor

Patric Jensfelt

Professor

Organised by

Sponsors