
Dynamic Scoop-and-Flick Manipulation for Rapid Non-Prehensile High-Arc Object Transfer (2025)

1. Overview

This repository contains the software implementation of Dynamic Scoop-and-Flick Manipulation, a technique for dynamically manipulating objects to launch them in a non-prehensile manner using a custom-built direct-drive finger. Our approach leverages concepts from high-speed scooping and slingshot mechanisms to achieve rapid, high-arc object transfer.

Dynamic Scoop & Flick Manipulation

Transferring Various Unseen Objects

Our system successfully transfers unseen real-world objects with different shapes, masses, and materials.

2. Prerequisites

2.1 Hardware

Throw demo

- [**UR5e**](https://www.universal-robots.com/products/ur5-robot/): Collaborative Robot Arm
- [**Direct-Drive Gripper**](https://github.com/JS-RML/Direct-Drive-Gripper-with-Swivel-Fingertips/tree/main)

2.2 Software

Our software is implemented in Python 3 and tested on Ubuntu 20.04.

Clone the repository:

git clone https://github.com/JS-RML/Dynamic-Scoop-and-Flick-Manipulation.git

Install odrivetool (provided by the `odrive` pip package). If you already installed this package when calibrating your ODrives, you can skip this step.

pip install --upgrade odrive

Connect your motor drivers (ODrive S1) to your PC and run odrivetool in a terminal.

odrivetool

If successful, you will see the following output.

ODrive control utility v0.6.7
Please connect your ODrive
You can also type help() or quit().

Connected to ODrive S1 3868345A3539 (firmware v0.6.7) as odrv0
Connected to ODrive S1 3866346F3539 (firmware v0.6.7) as odrv1

Record these serial numbers (3868345A3539, 3866346F3539) to create Actuator objects later.
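As an illustration of how these serial numbers are used, the sketch below (an assumption, not the repo's actual `Actuator` class) finds each ODrive S1 board by serial so the left and right fingers are never swapped:

```python
def is_valid_serial(sn: str) -> bool:
    """ODrive serial numbers are upper-case hexadecimal strings."""
    return len(sn) > 0 and all(c in "0123456789ABCDEF" for c in sn)

SN_L = "3868345A3539"  # left finger driver
SN_R = "3866346F3539"  # right finger driver

def connect_actuators(sn_l: str, sn_r: str):
    """Find each ODrive S1 by serial number so left/right are never mixed up.
    Requires both boards to be plugged in; blocks until they enumerate."""
    import odrive  # provided by the `odrive` pip package
    odrv_l = odrive.find_any(serial_number=sn_l)
    odrv_r = odrive.find_any(serial_number=sn_r)
    return odrv_l, odrv_r
```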

Then, install the software dependencies:

pip3 install -r requirements.txt

ROS service message build

  1. Build the workspace and source it:
cd ./catkin_ws
catkin_make
source devel/setup.bash
  2. Verify the service type is generated:
rossrv show interfaces/Input
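For reference, a plausible field layout for `interfaces/srv/Input.srv` is sketched below. This layout is an assumption based on the model inputs and outputs described in Section 3.3 (mass and desired trajectory features in, predicted action parameters out); the repo's actual definition may differ.

```
# interfaces/srv/Input.srv (assumed layout)
float32 mass
float32 height
float32 range
---
float32 init_alpha
float32 thr_alpha
```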

3. Run Dynamic Scoop-and-Flick Manipulation

3.1 Before running the code

Check robot connection

Check that the computer and the robot are on the same subnet (e.g., if the robot IP is 192.168.1.x, the computer IP should be 192.168.1.y, and the subnet mask should be 255.255.255.0).
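The subnet check can also be done programmatically. The helper below is not part of this repo; it is a minimal sketch using Python's standard `ipaddress` module:

```python
# Verify that two IPs fall in the same subnet before attempting a connection.
import ipaddress

def same_subnet(ip_a: str, ip_b: str, netmask: str = "255.255.255.0") -> bool:
    net = ipaddress.IPv4Network(f"{ip_a}/{netmask}", strict=False)
    return ipaddress.IPv4Address(ip_b) in net

print(same_subnet("192.168.1.45", "192.168.1.10"))  # → True
print(same_subnet("192.168.1.45", "192.168.2.10"))  # → False
```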

To verify the connection, run the following command in your terminal (replace 192.168.1.45 with your robot's actual IP):

ping 192.168.1.45

If successful, you should see output similar to the following:

PING 192.168.1.45 (192.168.1.45) 56(84) bytes of data.
64 bytes from 192.168.1.45: icmp_seq=1 ttl=64 time=0.342 ms
64 bytes from 192.168.1.45: icmp_seq=2 ttl=64 time=0.410 ms
64 bytes from 192.168.1.45: icmp_seq=3 ttl=64 time=0.388 ms
...

After checking the connection, open the main files (e.g., main.py) and modify the ROBOT_IP variable with your robot's IP address.

ROBOT_IP = "192.168.1.45"

Modify the gripper code

Modify 'GRIPPER.py' as follows.

(1) Define the variables SN_L and SN_R using the serial numbers recorded above.

SN_L = '3868345A3539'
SN_R = '3866346F3539'

(2) Open the ODrive GUI. Rotate the link so that it points exactly south, then power on and record the pos_estimate value. Update the offsets using the recorded values.

offset_L = -0.415
offset_R = -0.093

How to customize control parameters

There are key parameters for scoop-and-flick manipulation.

Object Parameters

  • mass : object mass

Hardware Setup

Gripper Parameters

  • Initial Delta ($\delta_{\text{init}}$) : The relative orientation between the two internal links at the initial configuration
  • Initial Alpha ($\alpha_{\text{init}}$) : The angle of attack of the fingertip with respect to the ground surface at the initial configuration
  • Scoop Delta ($\delta_{\text{scoop}}$) : The relative orientation between the two internal links at the scooping configuration
  • Scoop Alpha ($\alpha_{\text{scoop}}$) : The angle of attack of the fingertip with respect to the ground surface at the scooping configuration
  • Flick Delta ($\delta_{\text{flick}}$) : The relative orientation between the two internal links at the flicking configuration
  • Flick Alpha ($\alpha_{\text{flick}}$) : The angle of attack of the fingertip with respect to the ground surface at the flicking configuration
  • Positional Gain : Motor P-gain
  • Velocity Gain : Motor D-gain

Arm's Motion Parameters

  • a : Acceleration for speedl
  • a_decel : Deceleration for stopl
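The parameters above can be collected in a single configuration object. The grouping and placeholder values below are illustrative assumptions, not the repo's actual API:

```python
# Illustrative grouping of the scoop-and-flick control parameters.
# Field names and example values are assumptions; units are guesses.
from dataclasses import dataclass

@dataclass
class ScoopFlickParams:
    mass: float         # object mass
    init_delta: float   # link orientation, initial configuration [deg]
    init_alpha: float   # fingertip angle of attack, initial config [deg]
    scoop_delta: float  # link orientation, scooping configuration [deg]
    scoop_alpha: float  # angle of attack, scooping configuration [deg]
    flick_delta: float  # link orientation, flicking configuration [deg]
    flick_alpha: float  # angle of attack, flicking configuration [deg]
    pos_gain: float     # motor P-gain
    vel_gain: float     # motor D-gain
    a: float            # arm acceleration for speedl
    a_decel: float      # arm deceleration for stopl

params = ScoopFlickParams(mass=21.0, init_delta=90.0, init_alpha=30.0,
                          scoop_delta=60.0, scoop_alpha=5.0,
                          flick_delta=120.0, flick_alpha=0.0,
                          pos_gain=20.0, vel_gain=0.3, a=8.0, a_decel=10.0)
```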

3.2 Collecting Data

To collect data for characterizing the flicking dynamics of the finger, init_alpha and flick_alpha are randomly varied, and the corresponding control parameters are logged for each trial.

(1) The sampling range of init_alpha and flick_alpha can be specified by editing the following lines in main_only_finger.py.

INIT_ALPHA  = random.uniform(20, 40)
FLICK_ALPHA = random.uniform(-10, 10)

(2) Run main_only_finger.py.

python3 main_only_finger.py

Use Tracker or another vision system to record the height and range of each trial.
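The randomized sampling and logging can be sketched as below. The loop structure and file name are assumptions; only the sampling ranges come from main_only_finger.py, and the tracker-measured height/range columns are appended afterwards:

```python
# Sample init_alpha / flick_alpha uniformly per trial and log them to CSV.
import csv
import random

def sample_trial():
    return {"init_alpha": random.uniform(20, 40),
            "flick_alpha": random.uniform(-10, 10)}

trials = [sample_trial() for _ in range(5)]
with open("trial_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["init_alpha", "flick_alpha"])
    writer.writeheader()
    writer.writerows(trials)
```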

3.3 Model Training and Usage

Data Format

Training data in CSV format with columns: mass, height, range, init_alpha, thr_alpha. The model predicts action parameters (init_alpha, thr_alpha) from object mass and desired trajectory features.

Training

To mitigate sample imbalance, we estimate the sample density with a Gaussian Kernel Density Estimation (KDE) model and use KDE-based weights during training.
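The idea of KDE-based weighting can be illustrated in one dimension: samples in dense regions of the training set get low weight, samples in sparse regions get high weight. This is a minimal approximation of the approach, not the repo's implementation:

```python
# Inverse-density sample weighting via a 1-D Gaussian KDE.
import numpy as np

def kde_weights(x: np.ndarray, bandwidth: float = 0.1) -> np.ndarray:
    # Gaussian KDE density estimate at each sample location.
    diffs = (x[:, None] - x[None, :]) / bandwidth
    density = np.exp(-0.5 * diffs**2).sum(axis=1)
    density /= len(x) * bandwidth * np.sqrt(2 * np.pi)
    w = 1.0 / density            # inverse-density weighting
    return w / w.sum() * len(x)  # normalize to mean weight 1

x = np.array([0.0, 0.01, 0.02, 0.5, 1.0])  # clustered near 0, sparse elsewhere
w = kde_weights(x)
# Clustered samples receive smaller weights than isolated ones.
```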

Train progressive KDE-weighted models:

python model_trainer.py train_all \
    --data train_set_sum.csv \
    --model_dir models_progressive_kde \
    --max_samples 440 \
    --step 40 \
    --epochs 1000 \
    --kde_bandwidth 0.1

Key Parameters:

  • --max_samples: Maximum training samples
  • --step: Progressive increment size
  • --epochs: Training epochs per model
  • --kde_bandwidth: KDE weighting bandwidth

Other modes: test, visualize, compare (see python model_trainer.py --help)

Model Architecture

  • Input: 3 features (Mass, Height, Range). (Old: mass and parabolic trajectory coefficients a, b, c.)
  • Architecture: 3-layer MLP with 128 hidden units, ReLU activation, Dropout (0.25)
  • Output: 2 parameters (Initial Alpha, Flick Alpha)
  • Training: Progressive learning with KDE weighting for balanced sampling
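A forward-pass sketch of this architecture is shown below with random placeholder weights (the actual model is trained by model_trainer.py; dropout is active only during training, so it is omitted here):

```python
# 3 inputs (mass, height, range) -> 128 -> 128 -> 2 outputs (init/flick alpha)
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

W1, b1 = rng.normal(size=(3, 128)) * 0.1, np.zeros(128)
W2, b2 = rng.normal(size=(128, 128)) * 0.1, np.zeros(128)
W3, b3 = rng.normal(size=(128, 2)) * 0.1, np.zeros(2)

def predict(x):
    h = relu(x @ W1 + b1)
    h = relu(h @ W2 + b2)
    return h @ W3 + b3

y = predict(np.array([[21.0, 0.5, 0.5]]))  # shape (1, 2)
```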

Deployment

Start ROS prediction service:

python nn_action_service.py --model_dir models_progressive_kde/model_440

3.4 Run scoop-and-flick manipulation on the robot

Run main.py.

python3 main.py --mass 21 --height 0.5 --range 0.5 --bucket_range 1.3
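A hypothetical argparse front-end matching this command line is sketched below; main.py's actual argument handling and units may differ:

```python
# Argument parser mirroring the CLI flags shown above.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="Dynamic scoop-and-flick demo")
    p.add_argument("--mass", type=float, required=True,
                   help="object mass")
    p.add_argument("--height", type=float, required=True,
                   help="desired apex height [m]")
    p.add_argument("--range", type=float, required=True,
                   help="desired landing range [m]")
    p.add_argument("--bucket_range", type=float, default=1.3,
                   help="distance to the target bucket [m]")
    return p

args = build_parser().parse_args(
    ["--mass", "21", "--height", "0.5", "--range", "0.5"])
```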

4. Workflow

Collecting data:

  1. Run random command generator.
  2. Collect training data from robot experiments → train_set_sum.csv

Training:

  1. Train models: python model_trainer.py train_all --data train_set_sum.csv
  2. Evaluate the models by rolling them out in the real world.

Deployment:

  1. Launch ROS
  2. Start prediction service: python nn_action_service.py --model_dir <path>
  3. Call service with object mass and desired trajectory parameters
  4. Execute manipulation with predicted action parameters

5. Timeline of Dynamic Scoop-and-Flick Manipulation

Timeline

  • Initial Configuration (t = 0.0 s): The TCP is positioned in front of the object. When triggered, the arm accelerates to the commanded velocity; a higher velocity increases the object's range while the maximum height is kept the same, giving predictable scaling of the throwing radius.
  • Scoop & Acceleration Start (t = 0.22 s): Upon the start trigger, the arm begins accelerating in the throwing direction while the finger scoops the object from the ground. To execute the flick at a consistent horizontal release location across trials, the tip position is tracked at 500 Hz using the TCP pose and gripper kinematics.
  • Flick (t = 0.86 s): While maintaining the commanded horizontal speed, a flick command is triggered when the monitored tip position reaches the target release location. This high-rate feedback enforces near-identical horizontal release positions after scooping, improving repeatability of the release condition and range control.
  • Follow-Through & Deceleration Start (t = 1.12 s): Immediately after the flick, the arm maintains the horizontal velocity for ~0.2 s to ensure sufficient momentum transfer until the object fully separates from the fingertip. After the follow-through, the arm begins decelerating, improving trial-to-trial reproducibility.
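The flick trigger in the timeline above can be illustrated with a toy monitoring loop: the tip's horizontal position is checked at 500 Hz and the flick fires once it crosses the target release location. All names and numbers here are illustrative; the real system derives the tip position from the TCP pose and the gripper kinematics:

```python
DT = 1.0 / 500.0   # 500 Hz monitoring loop
RELEASE_X = 0.45   # target horizontal release location [m]

def run_trigger(x0: float, v: float, max_steps: int = 5000):
    """Step the tip's x-position forward at constant speed;
    return the time at which the flick command would be issued."""
    x, t = x0, 0.0
    for _ in range(max_steps):
        if x >= RELEASE_X:  # flick command fires here
            return t
        x += v * DT         # in reality: TCP pose + gripper kinematics
        t += DT
    return None             # release location never reached

t_flick = run_trigger(x0=0.0, v=0.7)
```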

Robot state during the scooping process

The solid (dotted) lines represent the measured (commanded) values of each parameter. The vertical dotted lines indicate the moments when the robot receives the triggers.

Timeline

