---
datasets: aadarshram/pick_place_tape
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- act
- robotics
---

# Model Card for act

This is an Action Chunking Transformer (ACT) model trained for a pick-and-place tape task.

Notes:

1. The model was trained with only the front camera, to see how well it can perform with just that view.
2. The front camera gives partial observability: during motion, some parts of the robot leave the camera's frame. Can the model handle that?
3. The pick location is fixed, and the drop location has slight variations (given the difficulty of learning to grasp a tape).

Train logs: https://api.wandb.ai/links/ramachandranaadarsh-indian-institute-of-technology-madras/bf8lft8i

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval.

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/ \
  --policy.type=act \
  --output_dir=outputs/train/ \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/ \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train//checkpoints/`._

### Evaluate the policy / run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=/eval_ \
  --policy.path=/ \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or Hub checkpoint.

---

## Model Details

- **License:** apache-2.0
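
The card notes that ACT predicts short action chunks instead of single steps; at execution time, overlapping chunk predictions for the same timestep can be blended with the exponentially weighted temporal ensemble described in the ACT paper. The sketch below is purely illustrative and not part of this repo's code: the function name `temporal_ensemble`, the `(start, chunk)` chunk representation, and the weight constant `m` are assumptions for the example.

```python
import numpy as np

def temporal_ensemble(chunks, t, m=0.1):
    """Blend all chunk predictions that cover timestep t.

    chunks: list of (start_timestep, chunk) pairs, where chunk is an
            array of shape (chunk_len, action_dim).
    Weights follow the ACT paper's scheme w_i = exp(-m * i), where
    i = 0 is the oldest prediction covering t.
    """
    preds = []
    for start, chunk in chunks:
        idx = t - start
        if 0 <= idx < len(chunk):  # does this chunk predict an action for t?
            preds.append(chunk[idx])
    preds = np.asarray(preds, dtype=float)
    w = np.exp(-m * np.arange(len(preds)))  # oldest prediction gets w = 1
    return (w[:, None] * preds).sum(axis=0) / w.sum()

# With a single chunk covering t, the ensemble reduces to that chunk's action.
single = [(0, np.array([[1.0], [2.0]]))]
print(temporal_ensemble(single, t=1))
```

A larger `m` keeps the executed action closer to older predictions (smoother motion); a smaller `m` incorporates fresh observations faster.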