Hello, I'm Yuhao Lu

I received my master's degree from the Department of Electronic Engineering at Tsinghua University, advised by Shengjin Wang. I work on robotics and computer vision.


News

  • Feb 2022: FGC_GraspNet has been accepted to ICRA 2022.
  • Jun 2022: We built a Visual Grounding dataset for robotics, RoboRefIt.
  • Mar 2023: I led the CV-AI team in the 2023 Intel Indoor Robot Learning Grand Challenge in Shanghai, China, winning the FIRST PRIZE in the Recognition Track (few-shot detection task) and the SECOND PRIZE in the Manipulation Track (grasping and feeding tasks).
  • Jun 2023: I was honored as a 2023 Outstanding Master's Graduate of Tsinghua University (top 2%), and my thesis was recognized as a 2023 Outstanding Master's Thesis (top 5%).
  • Jun 2023: VL_Grasp has been accepted to IROS 2023.

Publications

Hybrid Physical Metric For 6-DoF Grasp Pose Detection

ICRA, 2022

We proposed a hybrid physical metric to generate confidence labels for 6-DoF grasp poses, and designed a transformer-based point cloud learning network, FGC_GraspNet, to predict 6-DoF grasp poses.
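
As a rough illustration of the idea (not the paper's implementation), the sketch below shows how a point-cloud network might output per-point 6-DoF grasp parameters together with a confidence score supervised by metric-derived labels; all names here (GraspHead, confidence_loss, the 6-parameter pose encoding) are hypothetical placeholders.

```python
# Hypothetical sketch: a point-cloud network predicting 6-DoF grasp parameters
# plus a per-point confidence supervised by labels from a physical-metric scorer.
# Class and function names are illustrative, not taken from FGC_GraspNet.
import torch
import torch.nn as nn

class GraspHead(nn.Module):
    def __init__(self, in_dim: int = 3, feat_dim: int = 128):
        super().__init__()
        # Shared per-point feature extractor (stand-in for the real backbone).
        self.encoder = nn.Sequential(
            nn.Conv1d(in_dim, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU(),
        )
        # 6-DoF grasp parameters per point: 3 for translation, 3 for rotation (axis-angle).
        self.pose_head = nn.Conv1d(feat_dim, 6, 1)
        # One confidence value per point, trained against metric-based labels.
        self.conf_head = nn.Conv1d(feat_dim, 1, 1)

    def forward(self, points: torch.Tensor):
        # points: (B, 3, N) point cloud
        feats = self.encoder(points)
        poses = self.pose_head(feats)                 # (B, 6, N)
        conf = torch.sigmoid(self.conf_head(feats))   # (B, 1, N), confidence in [0, 1]
        return poses, conf

def confidence_loss(pred_conf: torch.Tensor, metric_labels: torch.Tensor) -> torch.Tensor:
    """Regress predicted confidence toward labels produced by a grasp-quality metric."""
    return nn.functional.mse_loss(pred_conf, metric_labels)

if __name__ == "__main__":
    net = GraspHead()
    cloud = torch.randn(2, 3, 1024)      # dummy batch of point clouds
    poses, conf = net(cloud)
    labels = torch.rand(2, 1, 1024)      # dummy metric-derived confidence labels
    print(poses.shape, conf.shape, confidence_loss(conf, labels).item())
```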

VL-Grasp: a 6-Dof Interactive Grasp Policy for Language-Oriented Objects in Cluttered Indoor Scenes

IROS, 2023

VL-Grasp is an interactive grasp policy that combines visual grounding with 6-DoF grasp pose detection. With VL-Grasp, a robot can grasp a target specified by a human's language command, adapting to varied observation views and more diverse cluttered indoor scenes. We also build RoboRefIt, a new visual grounding dataset designed specifically for the robot interactive grasping task.
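
To make the two-stage idea concrete, here is a minimal, hypothetical sketch of a language-conditioned grasp pipeline: a visual grounding model localizes the referred object, the point cloud is cropped to that region, and a 6-DoF grasp detector proposes grasps that are ranked by score. The interfaces (ground_referred_object, detect_grasps, etc.) are placeholders under assumed inputs, not the actual VL-Grasp API.

```python
# Hypothetical two-stage pipeline: visual grounding -> 6-DoF grasp detection.
# All functions and types below are illustrative placeholders, not VL-Grasp code.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Grasp:
    translation: np.ndarray  # (3,) grasp center in the camera frame
    rotation: np.ndarray     # (3, 3) gripper orientation
    score: float             # predicted grasp quality

def ground_referred_object(rgb: np.ndarray, command: str) -> np.ndarray:
    """Placeholder visual grounding model: return a box (x1, y1, x2, y2)
    for the object referred to by the language command."""
    h, w = rgb.shape[:2]
    return np.array([w // 4, h // 4, 3 * w // 4, 3 * h // 4])

def crop_cloud_to_box(cloud: np.ndarray, pixels: np.ndarray, box: np.ndarray) -> np.ndarray:
    """Keep only points whose image projection falls inside the grounded box."""
    x1, y1, x2, y2 = box
    mask = (pixels[:, 0] >= x1) & (pixels[:, 0] < x2) & \
           (pixels[:, 1] >= y1) & (pixels[:, 1] < y2)
    return cloud[mask]

def detect_grasps(region_cloud: np.ndarray) -> List[Grasp]:
    """Placeholder 6-DoF grasp detector over the cropped point cloud."""
    center = region_cloud.mean(axis=0) if len(region_cloud) else np.zeros(3)
    return [Grasp(translation=center, rotation=np.eye(3), score=0.5)]

def language_conditioned_grasp(rgb, cloud, pixels, command: str) -> Grasp:
    box = ground_referred_object(rgb, command)       # stage 1: where is the target?
    region = crop_cloud_to_box(cloud, pixels, box)   # restrict geometry to the target
    grasps = detect_grasps(region)                   # stage 2: how to grasp it?
    return max(grasps, key=lambda g: g.score)        # pick the highest-scoring grasp

if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)                 # dummy image
    cloud = np.random.rand(2048, 3)                               # dummy point cloud
    pixels = np.random.randint(0, [640, 480], size=(2048, 2))     # dummy projections
    best = language_conditioned_grasp(rgb, cloud, pixels, "grasp the mug on the left")
    print(best.score, best.translation)
```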