VNU University of Engineering and Technology, Vietnam; Department of Robotics, Hanyang University, Korea. 2026.
Thanh Nguyen Canh, Thanh Tuan Tran, Haolan Zhang, Ziyan Gao, Xiem HoangVan, Nak Young Chong. Learning to Manipulate by Watching Humans: A Decoupled Vision-Language-Driven Imitation Framework. 2026. [pdf]
Table: Video-to-command generation performance (BLEU scores) on standard object sets (bold and underline denote best and second best, respectively).
Reach action with UR5 robot (simulation).
Pick action with UR5 robot (simulation).
Move action with UR5 robot (simulation).
Put action with UR5 robot (simulation).
Reach action with UF850 robot (simulation).
Pick action with UF850 robot (simulation).
Move action with UF850 robot (simulation).
Put action with UF850 robot (simulation).
Real experiment with Reach and Pick actions using the UF850 robot.
Acknowledgements: This webpage template was borrowed from https://akanazawa.github.io/cmr/.