Add enhanced API control over simulation objects

Discussion in 'HAPTIX' started by David Kluger, May 11, 2015.

  1. We would like the ability to change object colors and transparency via the API while a simulation is running. This would let us reproduce experiments we have performed previously, in which we make targets for the fingers of the virtual limb and provide feedback to the participant when his or her fingers are in the correct targets. The feedback can be visual (changing the target color from red to green) or tactile (sensory stimulation on the neural interface). I have a video showing the experiment performed in our old VRE if you would like it for reference, but I cannot upload it here because the forum does not accept .mp4 uploads.
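
    Since the extended API is not yet available, the following is only a rough sketch of the kind of call we have in mind, written against the standard MuJoCo C library; the geom name "target0" and the helper itself are made up for illustration:

        #include "mujoco.h"

        // Recolor a named geom at runtime; alpha < 1 renders it translucent.
        void set_target_rgba(const mjModel* m, const char* geom_name,
                             float r, float g, float b, float a) {
            int gid = mj_name2id(m, mjOBJ_GEOM, geom_name);
            if (gid < 0) {
                return;  // no geom with that name
            }
            float* rgba = m->geom_rgba + 4*gid;
            rgba[0] = r;  rgba[1] = g;  rgba[2] = b;  rgba[3] = a;
        }

        // e.g. a finger-in-target event could call:
        //   set_target_rgba(m, "target0", 0.0f, 1.0f, 0.0f, 0.4f);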
     
  2. Emo Todorov (Administrator, Staff Member)

    Just changed the forum settings to allow video files; please try to upload it again. Yes, changing object color and transparency is one of the features we are currently adding to the extended API.
     
  3. Glad to hear color modification is going to be added to the new API. Is there a tentative release date?

    Unfortunately, the forum will still not let me upload a .mp4 file. What video file types are allowed?
     
  4. Emo Todorov (Administrator, Staff Member)

    Strange that you cannot upload a movie... I just did without any problems... Can you try again and tell me what error message you are getting, if any? The allowed file extensions are:

    zip
    txt
    pdf
    png
    jpg
    jpeg
    gif
    c
    cpp
    m
    xml
    urdf
    mjb
    mp4
    avi
    mpeg
    mkv
     
  5. When trying to upload a .mp4 video using the "Upload a File" button on the bottom right of the reply pane, I get the error message screen shown in the attached file.

    It reads: "The following error occurred / The uploaded file does not have an allowed extension. / 'filename'"
     

    Attached Files:

  6. Emo Todorov (Administrator, Staff Member)

    I thought you were talking about uploading Resources... anyway, now I changed the Attachment Upload options as well, and uploaded an mp4 here as a test. The file size limit is 5000 KB.
     

    Attached Files:

  7. Success! The movie is attached.

    In this movie, we use the neural interface to decode finger movements from an open hand toward the palm, to either a close or a far target. The targets are rendered as translucent red spheres that turn green when the fingertips are inside them. He is not looking at the screen in this trial, but can use visual feedback during training trials. We stimulate nerve fibers via the neural interface when his fingers are in the target area. When he keeps his fingers in the target for a set amount of time, he hears a beep and indicates whether he thinks the targets are close or far.

    There are two things we are doing with virtual objects in this video that we would like the ability to do with MuJoCo:
    1. Change object color based on finger position from the API (which you have indicated will be possible in the next release)
    2. Move the target object's position via commands called from the API
    Being able to move objects via the API also expands what we can do experimentally with the VRE. We may want our volunteers to "track" objects with their fingers to further test our closed-loop control methods, i.e. have the target move continuously instead of appearing and disappearing at specified locations. A rough sketch of both behaviors follows below.

    I hope this clears up some gray areas from my previous requests for expanded API control.
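
    As a rough sketch of both requested behaviors, again written against the standard MuJoCo C library rather than the forthcoming extended API (the site, geom, and body names, the target radius, and the sine-wave trajectory are all placeholders):

        #include <math.h>
        #include "mujoco.h"

        // Call once per control cycle, after the simulation has stepped.
        void update_target(const mjModel* m, mjData* d) {
            int tip  = mj_name2id(m, mjOBJ_SITE, "index_tip");    // fingertip site
            int geom = mj_name2id(m, mjOBJ_GEOM, "target_geom");  // sphere target
            int body = mj_name2id(m, mjOBJ_BODY, "target_body");  // its mocap body
            if (tip < 0 || geom < 0 || body < 0) {
                return;
            }

            // 1. color feedback: green while the fingertip is inside the sphere
            const mjtNum radius = 0.02;  // placeholder target radius in meters
            mjtNum dist = mju_dist3(d->site_xpos + 3*tip, d->geom_xpos + 3*geom);
            float* rgba = m->geom_rgba + 4*geom;
            rgba[0] = (dist < radius) ? 0.0f : 1.0f;  // red channel
            rgba[1] = (dist < radius) ? 1.0f : 0.0f;  // green channel

            // 2. tracking: drive the mocap target along a placeholder trajectory
            int mid = m->body_mocapid[body];
            if (mid >= 0) {
                d->mocap_pos[3*mid] = 0.1*sin(d->time);  // oscillate along x
            }
        }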
     

    Attached Files:

  8. Emo Todorov (Administrator, Staff Member)

    Nice demo! Re moving objects, MuJoCo has a special type of body called a "mocap body". Such bodies are static for simulation purposes but can be moved dynamically by external code. Currently the base of the hand is such a body. We have code (internal to the GUI but external to the simulator itself) that reads the OptiTrack data and moves this mocap body accordingly. The new API will allow getting and setting the positions and orientations of all mocap bodies. In this way you can achieve what you want, plus read the mocap data, and even replace the OptiTrack with an external motion capture system (although this will require some calibration steps that are presently automated). The new API should be released sometime this week.
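
    For reference, a minimal sketch of this get/set pattern using direct access in the standard MuJoCo C library, not the extended API described above; the body name is a placeholder, and the body must be declared with mocap="true" in the model XML:

        #include "mujoco.h"

        // Reposition a named mocap body; the change takes effect on the next step.
        void move_mocap_body(const mjModel* m, mjData* d, const char* name,
                             const mjtNum pos[3], const mjtNum quat[4]) {
            int body = mj_name2id(m, mjOBJ_BODY, name);
            if (body < 0) {
                return;  // no body with that name
            }
            int mid = m->body_mocapid[body];  // -1 unless declared mocap="true"
            if (mid < 0) {
                return;
            }
            mju_copy3(d->mocap_pos  + 3*mid, pos);   // world position
            mju_copy4(d->mocap_quat + 4*mid, quat);  // orientation, (w,x,y,z)
        }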