Reinforcement learning library for real-time robot control

Discussion in 'Simulation' started by Ilija Stanojkovic, Apr 8, 2019.

  1. Hi all, I'm searching for an RL stack that would work for both a simulated and a real robot. Currently, what I had in mind was to use ROS to describe the robot (xacro/URDF), MuJoCo to simulate the physics, and OpenAI Gym to encapsulate the RL algorithms. It seems, however, that this stack is only suitable for simulation, whereas what I'm interested in is a set of tools/libs that are independent of the underlying control mechanism. Meaning, I would like to implement and benchmark RL algorithms and swap the simulated or real robot in and out as needed, without any changes to the algorithms. I would prefer the RL lib to be in Python, but I'm open to all your suggestions. Thanks :)
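    To make what I mean concrete, this is roughly the separation I'm after. Just a sketch; RobotEnv, apply_action and get_state are names I'm making up here, not from any existing library:

    import gym
    import numpy as np
    from gym import spaces


    class RobotEnv(gym.Env):
        """Gym env that delegates actuation/measurement to a pluggable robot backend."""

        def __init__(self, robot, n_joints=7):
            # `robot` is anything exposing reset(), apply_action(a) and get_state();
            # it could wrap a MuJoCo model or the real hardware drivers.
            self.robot = robot
            self.action_space = spaces.Box(-1.0, 1.0, shape=(n_joints,), dtype=np.float32)
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2 * n_joints,), dtype=np.float32)

        def reset(self):
            self.robot.reset()
            return self.robot.get_state()

        def step(self, action):
            self.robot.apply_action(action)
            obs = self.robot.get_state()
            reward = 0.0  # task-specific reward would go here
            done = False
            return obs, reward, done, {}

    The RL code would then only ever see a gym.Env, so swapping a simulated backend for the real robot shouldn't require touching the algorithms at all.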

    P.S. If this is the wrong place for this question, please point me in the right direction or move the thread to a dedicated subforum.
     
  2. I doubt that there is an out-of-the-box RL stack that supports various types of hardware and provides the corresponding simulation models (MJCF, URDF, or whatever). I always find hardware setups to impose very problem-specific constraints on the software side. However, just to give you an idea: in our research group we tend to have a very accurately modelled twin of our real hardware and of its control interface (e.g. the PD+ controller that produces joint torques from the deviation to a reference joint-position trajectory). The mock and the real hardware then share a common command and measurement interface: both "subscribe" to a common actuation message and "publish" their measurements on a respective topic. I hope this helps ...
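    Stripped down, the split looks something like this. Only a sketch to illustrate the idea, not our actual code: RobotInterface, MockRobot, publish_command etc. are made-up names, and the ROS plumbing and the MuJoCo step are replaced by a toy PD update.

    from abc import ABC, abstractmethod
    import numpy as np


    class RobotInterface(ABC):
        """Common command/measurement interface shared by mock and real hardware."""

        @abstractmethod
        def publish_command(self, q_ref):
            """Send a reference joint position (the PD+ controller sits behind this)."""

        @abstractmethod
        def latest_measurement(self):
            """Return the most recent joint state (positions and velocities)."""


    class MockRobot(RobotInterface):
        """Simulated twin; in practice this would step a MuJoCo model."""

        def __init__(self, n_joints=7, dt=0.002, kp=50.0, kd=2.0):
            self.q = np.zeros(n_joints)
            self.dq = np.zeros(n_joints)
            self.dt, self.kp, self.kd = dt, kp, kd

        def publish_command(self, q_ref):
            # Toy PD step standing in for the simulated plant + controller.
            ddq = self.kp * (q_ref - self.q) - self.kd * self.dq
            self.dq += ddq * self.dt
            self.q += self.dq * self.dt

        def latest_measurement(self):
            return np.concatenate([self.q, self.dq])


    # A RealRobot class would implement the same two methods by publishing the
    # command on the actuation topic and caching the last measurement message.

    The learning code only ever talks to the shared interface, so the mock and the real robot are interchangeable as long as they honour the same command and measurement contract.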
     
  3. Thanks for the reply!
    We ended up pursuing a direction quite similar to what you've described.