Improving the Zero-Shot Generalizability of Vision-Language-Action Models with Kochv1.1 and SO-101 Grippers
Published:
Authors: Alex Huang, in collaboration with IRVL Lab at UT Dallas.
TL;DR (too long, didn't read?): Jump to the video here.
Current Progress
This project explores how to improve the generalization ability of manipulation policies for robots like the Kochv1.1 and SO-101, both grippers optimized for object grasping and manipulation.
10-1-25: Installed LeRobot, conda, and the necessary packages for calibration and motor setup.
10-8-25: Successfully set up teleoperation and cable management.
10-17-25: Working on AprilTag detection with OpenCV (to determine transformation matrices) first, and then image collection for imitation learning; a rough pose-estimation sketch is included after this list.
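
As a minimal sketch of the AprilTag step above: OpenCV's `aruco` module (OpenCV 4.7+ API) can detect the 36h11 AprilTag family and, given camera intrinsics from calibration, recover each tag's pose with `solvePnP`. The tag size, camera matrix, distortion coefficients, and camera index below are placeholder assumptions, not values from this project.

```python
import cv2
import numpy as np

# --- Placeholder values (assumptions): replace with your own calibration results ---
TAG_SIZE_M = 0.05          # physical tag edge length in meters (assumed)
CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])   # intrinsics from camera calibration (assumed)
DIST_COEFFS = np.zeros(5)  # distortion coefficients from camera calibration (assumed)

# AprilTag 36h11 family via OpenCV's ArUco module
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# 3D corners of a tag in its own frame (z = 0 plane), in the same order that
# detectMarkers returns them: top-left, top-right, bottom-right, bottom-left.
half = TAG_SIZE_M / 2.0
OBJ_POINTS = np.array([[-half,  half, 0.0],
                       [ half,  half, 0.0],
                       [ half, -half, 0.0],
                       [-half, -half, 0.0]], dtype=np.float32)

def tag_poses(frame):
    """Return {tag_id: 4x4 homogeneous transform from tag frame to camera frame}."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    poses = {}
    if ids is None:
        return poses
    for tag_corners, tag_id in zip(corners, ids.flatten()):
        img_points = tag_corners.reshape(4, 2).astype(np.float32)
        ok, rvec, tvec = cv2.solvePnP(OBJ_POINTS, img_points, CAMERA_MATRIX, DIST_COEFFS)
        if not ok:
            continue
        R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.flatten()
        poses[int(tag_id)] = T
    return poses

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)        # camera index 0 is an assumption
    ok, frame = cap.read()
    if ok:
        print(tag_poses(frame))
    cap.release()
```

Each returned matrix maps points from a tag's frame into the camera frame; inverting or chaining these transforms is what relates camera observations to the workspace before moving on to image collection for imitation learning.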
Quick Video Summary
Regular updates will be posted here.
