RGB Camera Sensor

Discussion in 'Modeling' started by Ethan Brooks, Sep 25, 2017.

  1. First of all, I just want to thank Emo for the top-notch software that is MuJoCo and for all his help with it. I am working with a robot in MuJoCo that has several sensors, including an RGB camera, which is the only sensor not supported by MuJoCo. How do you recommend implementing one? I've noticed that a custom "user sensor" can be implemented, but I would still need to render the scene somehow. What is the easiest way to do this with MuJoCo?

    Thanks.
     
  2. Emo Todorov

    Emo Todorov Administrator Staff Member

    MuJoCo sensors are evaluated at each simulation step by convention. If camera views were to be rendered at the same rate, that would be too slow, and no one needs to simulate such a high-speed camera anyway. This is why they are not supported as official sensors. However, you can easily implement this functionality yourself. Add the desired camera to your model, select that camera in mjvCamera (unless you are happy with the default camera), and then render in the offscreen buffer and read back the pixels. See the code sample record.cpp:

    http://www.mujoco.org/book/source/record.cpp

    The size of the offscreen buffer is a model parameter which can be adjusted in the XML. The frame rate is up to your implementation.
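    One detail when reading pixels back: mjr_readPixels fills the buffer in OpenGL's bottom-up row order, so the image usually needs a vertical flip before saving or displaying it top-down. A minimal sketch of such a flip (the helper name flip_rows and the 4096-pixel width cap are my own assumptions, not part of the MuJoCo API):

    ```c
    #include <string.h>

    /* Flip an RGB image vertically in place.  mjr_readPixels returns rows
       bottom-to-top (OpenGL convention), so a flip like this is usually
       needed before writing the image out in top-down order. */
    static void flip_rows(unsigned char *rgb, int width, int height)
    {
        int stride = 3 * width;          /* bytes per RGB row */
        unsigned char tmp[3 * 4096];     /* assumes width <= 4096 */
        for (int r = 0; r < height / 2; r++) {
            unsigned char *top = rgb + r * stride;
            unsigned char *bot = rgb + (height - 1 - r) * stride;
            memcpy(tmp, top, stride);
            memcpy(top, bot, stride);
            memcpy(bot, tmp, stride);
        }
    }
    ```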
     
  3. Emo, thank you for your response. If I understand the file correctly, I am essentially doing this already -- pulling images from a secondary camera. The only issue is that the camera often cuts through walls. I know that is a slightly separate issue, but is there a way to prevent it from happening?
     
  4. Emo Todorov

    Emo Todorov Administrator Staff Member

    What exactly does that mean -- is it going through the wall? The camera by itself is not a rigid body that participates in collision detection. So if you want it to collide with things, define a body with geoms and attach the camera to it.
     
  5. There are times when a wall object (a box geom) seems to intersect the lens of the camera, so that the view cuts through the wall and exposes to the robot's view the interior of the box geom and the other side of the wall. Perhaps this can be corrected by tightening the field of view somehow?
     
  6. Emo Todorov

    Emo Todorov Administrator Staff Member

    Oh, that is because the znear setting is too large for your model. The XML attribute you need to adjust is visual/map/znear. It applies to all cameras in the model. Don't make it too small though, because the depth buffer will lose resolution.
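    For illustration, something like this (attribute location per the MuJoCo XML reference; the value is model-dependent and only an example):

    ```xml
    <mujoco>
      <visual>
        <!-- near clipping plane, in units of the model extent;
             smaller lets the camera get closer to geoms before clipping,
             at the cost of depth-buffer resolution -->
        <map znear="0.01"/>
      </visual>
    </mujoco>
    ```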
     
  7. Excellent. That fixed it. Thank you.
     
  8. Is there any issue with simultaneously rendering on-screen and off-screen? For some reason my scenes are badly darkened (screenshot attached). Meanwhile, the off-screen camera returns images that are completely black. My thought is that this must have something to do with GLFW, maybe two different processes overwriting the same buffer.

    I should add, I am working with mujoco_py. I am working on developing some pure mujoco code that reproduces these results, but unless you are able to interpret mujoco_py code, I don't have anything to show you yet.
     


  9. Emo Todorov

    Emo Todorov Administrator Staff Member

    Your image looks like there isn't enough lighting. You seem to have a model light in addition to the headlight (or did you disable the headlight?) but maybe the model light should be brighter and also further from the objects (unless it is directional). Or add some extra lights. Note that to render planes well, the 3rd size parameter of the plane should be small. That parameter determines the spacing between grid lines used for lighting computation. You can see the grid by switching to wireframe mode. Lighting is computed at the vertices of the grid rectangles and interpolated. So if the grid rectangles are large and one vertex is in an area where the light is attenuated, its effect will be seen over the entire rectangle.
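    A sketch of both suggestions in XML (the positions, sizes, and light values here are illustrative, not taken from your model):

    ```xml
    <mujoco>
      <worldbody>
        <!-- extra light, placed well above the objects -->
        <light pos="0 0 4" dir="0 0 -1" diffuse="0.7 0.7 0.7"/>
        <!-- plane: the first two sizes are half-extents for rendering;
             the third is the grid spacing used for lighting computation,
             so keep it small for finely interpolated lighting -->
        <geom type="plane" size="5 5 0.1" rgba="0.9 0.9 0.9 1"/>
      </worldbody>
    </mujoco>
    ```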

    Re off-screen vs. on-screen, the idea is that they should produce identical images. However I upgraded and cleaned up the off-screen rendering and buffer switching mechanism in MuJoCo 1.40, so if you are using 1.31 there may be glitches that I don't remember... anyway, 1.50 has a much better solver so definitely use that version if you can. Which version are you using? See the code sample record.cpp on how to use off-screen rendering and get the images from the GPU.
     
  10. Hi Emo. Sorry for the delayed reply. We changed machines and I was unable to reproduce the bug.

    The issue is not with the lighting -- in fact the first frame would render normally, but subsequent frames would go dark. I was able to suppress the bug (at some apparent cost to performance) by calling mjr_makeContext and mjr_setBuffer every timestep. I may not have needed to use both, but they both come in a single method in mujoco_py. On my new machine I've encountered a different rendering bug that I just submitted a forum post about.
     
  11. Emo Todorov

    Emo Todorov Administrator Staff Member

    You don't need mjr_makeContext at each timestep. mjr_setBuffer is needed only if you did something that changed the buffer being used. This has minimal overhead though, so calling it at each step is not a problem.
     
  12. I was able to fix this by calling mjr_setBuffer before every call to mjr_render. This was the basis for another pull request to mujoco-py.
     
  13. @Ethan Brooks Hi Ethan, is it possible to see the code you added for the RGBD camera?
     
  14. Hi @OkayHuman ,
    Here is the C code that I am using. I have some Cython wrappers that I wrote around it, but hopefully this will get you going. Also, I think it's RGB, not RGBD.

    Let me know if you have questions.

    Code:
    #include "mujoco.h"
    #include "stdio.h"
    #include "stdlib.h"
    #include "string.h"
    
    typedef struct state_t {
        mjModel *m;
        mjData *d;
        mjvScene scn;
        mjrContext con;
        mjvCamera cam;
        mjvOption opt;
    } State;
    
    int renderOffscreen(unsigned char *rgb, int height, int width, State *state)
    {
        mjrContext *con = &state->con;
        // mjrRect fields are {left, bottom, width, height}
        mjrRect viewport = {0, 0, width, height};
    
        // render in the offscreen buffer and read back the pixels
        mjr_setBuffer(mjFB_OFFSCREEN, con);
        if (con->currentBuffer != mjFB_OFFSCREEN)
            printf("Warning: offscreen rendering not supported, "
                   "using default/window framebuffer\n");
        mjr_render(viewport, &state->scn, con);
        mjr_readPixels(rgb, NULL, viewport, con);
        return 0;
    }
    }