Peng Zhou

| Google Scholar | GitHub |

I am currently an Assistant Professor in the School of Advanced Engineering at Great Bay University and the Principal Investigator of the Embodied MAnipulation InteLligence (EMAIL) Robotics Lab. My research interests lie in robotics, machine learning, and computer vision, with a focus on deformable object manipulation, robot perception and learning, and task and motion planning.

Before that, I worked at the Robotic and Machine Intelligence (ROMI) Lab and received my Ph.D. degree in Robotics from The Hong Kong Polytechnic University under the supervision of Dr. David Navarro-Alarcon. I then worked as a Postdoctoral Research Fellow at the University of Hong Kong (HKU), advised by Dr. Jia Pan. In 2021, I visited the Robotics, Perception and Learning (RPL) Lab at KTH as an exchange Ph.D. student under the supervision of Prof. Danica Kragic. During my Ph.D. studies and subsequent research, I have had the opportunity to collaborate with Dr. Jihong Zhu, Prof. Hesheng Wang, Dr. Pai Zheng, and Prof. Charlie Yang.

  News
  Publications

Bimanual Deformable Bag Manipulation Using a Structure-of-Interest Based Latent Dynamics Model
Peng Zhou, Pai Zheng, Jiaming Qi, Chenxi Li, Hoi-yin Lee, Chenguang Yang, David Navarro-Alarcon, Jia Pan
IEEE/ASME Transactions on Mechatronics (T-Mech), 2024
| arXiv | project page |

This paper introduces a novel approach to deformable object manipulation (DOM) by emphasizing the identification and manipulation of structures of interest (SOIs) in deformable fabric bags. We propose a bimanual manipulation framework that leverages a graph neural network (GNN)-based latent dynamics model to succinctly represent and predict the behavior of these SOIs.

Interactive Perception for Deformable Object Manipulation
Zehang Weng*, Peng Zhou*, Hang Yin, Alexander Kravberg, Anastasiia Varava, David Navarro-Alarcon, Danica Kragic
IEEE Robotics and Automation Letters (RA-L), 2024
*: equal contribution
| arXiv |

In this work, we address interactive perception for deformable object manipulation with a setup involving both an active camera and an object manipulator. Our approach is based on a sequential decision-making framework and explicitly considers the motion regularity and structure in coupling the camera and the manipulator.

Reactive human–robot collaborative manipulation of deformable linear objects using a new topological latent control model
Peng Zhou, Pai Zheng, Jiaming Qi, Chengxi Li, Hoi-Yin Lee, Anqing Duan, Liang Lu, Zhongxuan Li, Luyin Hu, David Navarro-Alarcon
Robotics and Computer-Integrated Manufacturing (RCIM), 2024
ESI Highly Cited + Hot Paper
| arXiv | project page |

In this paper, a novel approach is proposed for real-time reactive manipulation of deformable linear objects in the context of human–robot collaboration. The proposed approach combines a topological latent representation with a fixed-time sliding mode controller to enable seamless interaction between humans and robots.


  Awards
  • Track 3 Champion, Zhuhai International Dexterous Manipulation Challenge, 2024
  • IEEE R10 Outstanding Volunteer Award, 2023
  • Outstanding Young Researcher, National Engineering Research Center, 2022
  • IEEE MGA Young Professional Achievement Award, 2022
  • Best Artificial Intelligence Application Award, Hong Kong AI Open Competition, 2022
  • Hong Kong Innovation and Technology Commission Research Talent Hub (RTH-ITF), 2022
  • IEEE Young Professional, 2022
  • Outstanding Employee Award, Tencent, 2018
  • Outstanding Graduate, Tongji University, 2017
  • National Scholarship, Ministry of Education, China, 2016
  Service
Teaching Faculty, Perceptual Robotics (ME41006), 2020-21 Spring

Teaching Faculty, Reinforcement Learning for Robotics, 2021-22 Fall
  Contact

Centre for Transformative Garment Production (TransGP)
Units 1215-1220, 12/F, Building 19W,
SPX1, Hong Kong Science Park,
Pak Shek Kok, N.T.,
Hong Kong SAR.


Website design:
Avatar photo: generated in July 2024 by the AI app Miaoya Camera.