Starting in January 2026, Bionix has a growing need for a capable and reliable computer vision system. Henceforth referred to as the CVKAS (computer vision kinematic analysis system), it will be used to collect and record kinematic data to assist development of Bionix’s EMG-controlled prosthetic leg device.
I (Simon Wong) have previously created the document computer_vision_roadmap, which details potential uses of the CVKAS. The current document, Computer Vision System Details, should be considered the primary source of specifications and requirements for the CVKAS.
The files in the folder Video Kinematic Analysis are an example of how computer vision has been used by Bionix in the past, although the eventual application of the CVKAS is much broader and more advanced.
Figure 1. Sketch of the CVKAS being used with the gantry (top), and kinematic models (bottom).
The CVKAS will be used to track the kinematics of one of the following at a time:
- The gantry and prosthetic leg
- The leg of a human participant
The CVKAS needs to:
- Reliably and accurately track the linear position, velocity, and acceleration of points such as the hip, knee, and ankle joints (low hysteresis and high sensitivity).
- Use advanced visual/spatial sensors, such as stereoscopic cameras and time-of-flight sensors, to collect kinematic data that is representative of the real-world physics of the Bionix and human legs.
- Store the raw visual data and kinematic data so that they can easily be used in other software for simulation and visualization of past tests.
- Function equally well for tracking the Bionix leg/gantry and human legs.
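As a rough illustration of the first requirement, linear velocity and acceleration can be derived from tracked joint positions by finite differences. This is a minimal sketch only; the frame rate and the sample trajectory below are illustrative assumptions, not CVKAS specifications.

```python
# Sketch: deriving velocity and acceleration from tracked joint positions
# via central finite differences. Frame rate and positions are assumed values.

def derivative(samples, dt):
    """Central-difference derivative of a 1-D series sampled every dt seconds."""
    n = len(samples)
    out = []
    for i in range(n):
        lo = max(i - 1, 0)          # fall back to forward/backward
        hi = min(i + 1, n - 1)      # difference at the endpoints
        out.append((samples[hi] - samples[lo]) / ((hi - lo) * dt))
    return out

fps = 30.0                          # assumed camera frame rate
dt = 1.0 / fps
# Hypothetical x-positions (metres) of a knee marker over five frames.
x = [0.00, 0.01, 0.04, 0.09, 0.16]
vx = derivative(x, dt)              # linear velocity (m/s)
ax = derivative(vx, dt)             # linear acceleration (m/s^2)
```

The same differencing would apply per axis for each tracked joint; in practice the raw positions would come from the marker detections rather than a hard-coded list.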
The people below are listed in descending order of their proximity to CVKAS development:
- Simon Wong — AI and computer vision lead. Previously developed the computer vision pipeline for evaluating the internal transmission of the Bionix leg.
- Lance Quinto — Software lead. In charge of many software development efforts within Bionix.
General requirements: ArUco markers, camera, well-lit/visible setting, adhesives
- Pull the repository
- Connect a working HD-quality camera
- Navigate to the CVKAS directory:
  - ~/CVKAS/
- Run python3 generateMarkers.py
  - This should generate three marker images named aruco_[id].png
- Print the markers at a size suited to the camera distance:
  - 1 m: 5–8 cm; 2 m: 8–12 cm; 3 m: 12–15 cm
- The markers are labeled: aruco_0.png := hip, aruco_1.png := knee, aruco_2.png := ankle
- Attach the corresponding markers to the correct joint positions
- In the directory ~/CVKAS, run: python3 cvkas.py
- Click Start Camera
  - This should open a camera view
- With the ArUco markers attached, place the individual or prosthetic at the measured distance
- Check for identification through the UI of the app
  - e.g., linear position: __, detected velocity: __, ..., etc.
- Check for lines segmenting the markers
- Click Start Recording
- Begin your movements
- Click Stop Recording
  - This should save a video file as well as a CSV file of the collected data
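The CSV export at the end of the recording steps can be sketched roughly as follows. The column layout, file name, and per-frame detection format here are assumptions for illustration; only the marker-ID-to-joint labels (0 = hip, 1 = knee, 2 = ankle) come from the setup steps, and the actual cvkas.py output format may differ.

```python
# Sketch: writing one CSV row per frame of tracked joint positions.
# Column names and the sample detections below are hypothetical.
import csv

JOINTS = {0: "hip", 1: "knee", 2: "ankle"}  # marker ID -> joint label

def write_kinematics_csv(path, frames):
    """frames: list (one entry per frame) of dicts {marker_id: (x, y)}."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        header = ["frame"]
        for mid in sorted(JOINTS):
            header += [f"{JOINTS[mid]}_x", f"{JOINTS[mid]}_y"]
        writer.writerow(header)
        for i, detections in enumerate(frames):
            row = [i]
            for mid in sorted(JOINTS):
                x, y = detections.get(mid, ("", ""))  # blank if occluded
                row += [x, y]
            writer.writerow(row)

# Hypothetical two-frame recording (pixel coordinates).
frames = [
    {0: (320, 110), 1: (318, 240), 2: (321, 370)},
    {0: (322, 112), 1: (319, 243)},  # ankle marker occluded this frame
]
write_kinematics_csv("kinematics.csv", frames)
```

Keeping one row per frame with blanks for missed detections makes the file easy to load into later simulation or visualization tools, per the storage requirement above.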
Work in progress