Merged
43 commits
31408a8
corrected broken links and broken references
yeshaokai Jul 29, 2024
38e7ce8
Update setup.cfg
MMathisLab Jul 30, 2024
591d056
Update pyproject.toml
MMathisLab Jul 30, 2024
84e01ee
Update version.py
MMathisLab Jul 30, 2024
5aea5a6
Corrected typo. Added config yamls in setup
yeshaokai Jul 30, 2024
4c4e3e7
Removed config files that are no longer needed
yeshaokai Jul 30, 2024
0077578
changed workflow to pull the repo from git
yeshaokai Jul 30, 2024
8bd3884
Added comments to remind people to pay attention to data folder in th…
yeshaokai Jul 30, 2024
b5f7629
fixed pypi typo
yeshaokai Jul 30, 2024
e14c42f
Fixed a bug in create_project. Changed default use_vlm to False. Upda…
yeshaokai Jul 31, 2024
4c4a2fa
Merge branch 'main' of github.com:AdaptiveMotorControlLab/AmadeusGPT
yeshaokai Jul 31, 2024
51ed033
Merge branch 'main' into shaokai/fix_create_project_bug
yeshaokai Jul 31, 2024
9d81d54
removed WIP 3d keypoints
yeshaokai Jul 31, 2024
0b24c62
Merge branch 'main' into shaokai/3d_pose
yeshaokai Jul 31, 2024
dbcb6c5
Fixed one more
yeshaokai Jul 31, 2024
c60a506
WIP
yeshaokai Aug 2, 2024
0744c9c
solved conflicts
yeshaokai Aug 2, 2024
53f2f25
enforcing the use of create_project in demo notebooks and modified th…
yeshaokai Aug 2, 2024
d38c619
3D supported. Better tests. More flexible identifier
yeshaokai Aug 5, 2024
435174d
black and isort
yeshaokai Aug 5, 2024
a827c3b
added dlc to test requirement
yeshaokai Aug 5, 2024
b96bb5e
Made test use stronger gpt. Added dummy video
yeshaokai Aug 5, 2024
0107838
easier superanimal test
yeshaokai Aug 5, 2024
2c0a728
Better 3D prompt and fixed self-debug
yeshaokai Aug 6, 2024
18dd339
preventing infinite loop
yeshaokai Aug 6, 2024
91e8255
better prompt for 3D
yeshaokai Aug 6, 2024
4c181f4
better prompt for 3D
yeshaokai Aug 6, 2024
d4afe71
better prompt
yeshaokai Aug 6, 2024
e79391d
updates
yeshaokai Aug 6, 2024
5f0cafa
fixed serialization
yeshaokai Aug 6, 2024
4383d46
extension to support animation. Made self-debugging work with bigger …
yeshaokai Aug 6, 2024
8e9c561
better interpolation and corrected x,y,z convention
yeshaokai Aug 6, 2024
0953af5
incorporated suggestions
yeshaokai Aug 7, 2024
d22d2d1
add a test plot keypoint label
yeshaokai Aug 8, 2024
2a04d82
Merge branch 'main' of github.com:AdaptiveMotorControlLab/AmadeusGPT
yeshaokai Aug 8, 2024
d08492b
Fixed a bug. Changed hardcoded path to relative path in notebooks
yeshaokai Aug 8, 2024
b962de3
Merge branch 'main' into shaokai/0.1.2_patch
yeshaokai Aug 8, 2024
9161f1c
conflict solved
yeshaokai Aug 8, 2024
f9ce60e
updated vlm prompt to be more robust
yeshaokai Aug 9, 2024
09db607
Merge branch 'shaokai/0.1.2_patch' of github.com:AdaptiveMotorControl…
yeshaokai Aug 9, 2024
07e4ae2
deleted y axis inversion prompt
yeshaokai Aug 9, 2024
71a8d1b
Added animation support and added animation in horse demo
yeshaokai Aug 9, 2024
f940c94
edited readme
yeshaokai Aug 9, 2024
4 changes: 3 additions & 1 deletion README.md
@@ -73,7 +73,7 @@ You can git clone (or download) this repo to grab a copy and go. We provide exam…
### Here are a few demos that could fuel your own work, so please check them out!

1) [Draw a region of interest (ROI) and ask, "when is the animal in the ROI?"](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/EPM_demo.ipynb)
2) [Use a DeepLabCut SuperAnimal pose model to do video inference](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/custom_mouse_demo.ipynb) - (make sure you use a GPU if you don't have corresponding DeepLabCut keypoint files already!)
2) [Use your own data](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/YourData.ipynb) - (make sure you use a GPU to run SuperAnimal if you don't have corresponding DeepLabCut keypoint files already!)
3) [Write your own integration modules and use them](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/Horse_demo.ipynb). Bonus: [source code](amadeusgpt/integration_modules). Make sure you delete the cached modules_embedding.pickle if you add new modules!
4) [Multi-Animal social interactions](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/MABe_demo.ipynb)
5) [Reuse the task program generated by LLM and run it on different videos](https://github.com/AdaptiveMotorControlLab/AmadeusGPT/tree/main/notebooks/MABe_demo.ipynb)
@@ -126,6 +126,8 @@ the key dependencies that need to be installed are:
pip install notebook
conda install hdf5
conda install pytables==3.8
# pip install deeplabcut==3.0.0rc4 if you want to use SuperAnimal on your own videos

pip install amadeusgpt
```
## Citation
1 change: 1 addition & 0 deletions amadeusgpt/analysis_objects/visualization.py
@@ -143,6 +143,7 @@ def __init__(
        n_individuals: int,
        average_keypoints: Optional[bool] = True,
        events: Optional[List[BaseEvent]] = None,
        use_3d: Optional[bool] = False,
    ):
        assert len(keypoints.shape) == 3
        super().__init__(axs)
7 changes: 6 additions & 1 deletion amadeusgpt/integration_modules/embedding/__init__.py
@@ -1,2 +1,7 @@
from .cebra import *
try:
    import cebra
    from .cebra import *
except ImportError:
    print("not able to import cebra")

from .umap import *
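With the guard above, `amadeusgpt` stays importable when the optional CEBRA dependency is absent; only the UMAP integrations load in that case. A minimal sketch of the consumer-side effect (module path taken from the diff):

```python
# Sketch: with cebra absent, this import prints "not able to import cebra"
# instead of raising, and the UMAP integrations still load.
import amadeusgpt.integration_modules.embedding
```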
17 changes: 15 additions & 2 deletions amadeusgpt/managers/visual_manager.py
@@ -47,8 +47,21 @@ def __init__(
        self.animal_manager = animal_manager
        self.object_manager = object_manager

    def get_scene_image(self):
        scene_frame_index = self.config["video_info"].get("scene_frame_number", 1)
    @register_core_api
    def get_scene_image(self, scene_frame_index: int | None = None) -> np.ndarray:
        """
        Returns the frame at the given index in the video.

        Parameters
        ----------
        scene_frame_index: int (optional), the index of the video frame.

        Returns
        -------
        An ndarray image.

        For visualizing keypoints or keypoint labels, it's nice to overlay the keypoints on the scene image.
        """
        if scene_frame_index is None:
            scene_frame_index = self.config["video_info"].get("scene_frame_number", 1)
        if os.path.exists(self.video_file_path):
            cap = cv2.VideoCapture(self.video_file_path)
            cap.set(cv2.CAP_PROP_POS_FRAMES, scene_frame_index)
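The new optional `scene_frame_index` makes frame selection explicit at the call site. A minimal usage sketch, assuming a `behavior_analysis` object obtained via `amadeus.get_behavior_analysis(...)` as in the demo notebooks (the frame index here is illustrative):

```python
import matplotlib.pyplot as plt

# Grab a specific frame rather than the configured scene_frame_number default.
scene_image = behavior_analysis.visual_manager.get_scene_image(scene_frame_index=100)
plt.imshow(scene_image)  # scene_image is an ndarray frame
plt.show()
```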
6 changes: 5 additions & 1 deletion amadeusgpt/project.py
@@ -14,7 +14,11 @@ def create_project(data_folder, result_folder, **kwargs):
"result_folder": result_folder,
"video_suffix": ".mp4",
},
"llm_info": {"max_tokens": 4096, "temperature": 0.0, "keep_last_n_messages": 2},
"llm_info": {"max_tokens": 4096,
"temperature": 0.0,
# let's use the best model by default
"gpt_model": "gpt-4o",
"keep_last_n_messages": 2},
"object_info": {"load_objects_from_disk": False, "use_grid_objects": False},
"keypoint_info": {
"use_3d": False,
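Because `create_project` seeds these defaults, switching models is a per-project override rather than a code change. A sketch following the pattern in YourData.ipynb (the folder paths are placeholders, and the full `llm_info` block is passed since partial merging of nested kwargs isn't guaranteed):

```python
from amadeusgpt import create_project

kwargs = {
    "llm_info": {
        "max_tokens": 4096,
        "temperature": 0.0,
        "gpt_model": "gpt-4o-mini",  # cheaper than the gpt-4o default
        "keep_last_n_messages": 2,
    }
}
config = create_project("my_data_folder", "my_result_folder", **kwargs)
```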
13 changes: 7 additions & 6 deletions amadeusgpt/system_prompts/code_generator.py
@@ -85,12 +85,13 @@ def get_watching_events(identifier):
4) Make sure you do not import any libraries in your code. All needed libraries are imported already.
5) Make sure you distinguish positional and keyword arguments when you call functions in the api docs
6) If you are writing code that uses matplotlib to plot, make sure you comment the shape of the data to be plotted to double-check
7) if your plotting code plots coordinates of keypoints, make sure you invert y axis (only during plotting) so that the plot is consistent with the image
8) make sure the xlim and ylim covers the whole image. The image (h,w) is ({image_h},{image_w})
9) Do not define your own objects (including grid objects). Only use objects that are given to you.
10) You MUST use the index from get_keypoint_names to access the keypoint data of specific keyponit names. Do not assume the order of the bodypart.
11) You MUST call functions in api docs on the analysis object.
12) For api functions that require min_window and max_window, make sure you leave them as default values unless you are asked to change them.
7) Make sure the xlim and ylim cover the whole image. The image (h,w) is ({image_h},{image_w})
8) Do not define your own objects (including grid objects). Only use objects that are given to you.
9) You MUST use the index from get_keypoint_names to access the keypoint data of specific keypoint names. Do not assume the order of the bodyparts.
10) You MUST call functions in the api docs on the analysis object.
11) For api functions that require min_window and max_window, make sure you leave them as default values unless you are asked to change them.
12) When making plots or animations of keypoints, try to overlay the plots on the scene frame if feasible.


HOW TO AVOID BUGS:
You should always comment the shape of any numpy array you are working with to avoid bugs. YOU MUST DO IT.
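Taken together, rules 6, 7, and 12 above ask generated plotting code to look roughly like the following sketch (dummy data; real generated code would pull the frame from `get_scene_image` and the keypoints from the analysis object):

```python
import numpy as np
import matplotlib.pyplot as plt

scene_image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for get_scene_image()
keypoints = np.random.rand(10, 2) * [640, 480]  # (n_keypoints, 2), shape commented per the rules
plt.imshow(scene_image)  # rule 12: overlay keypoints on the scene frame
plt.scatter(keypoints[:, 0], keypoints[:, 1], s=10)
plt.xlim(0, 640)  # rule 7: xlim/ylim cover the whole image
plt.ylim(480, 0)  # image coordinates: origin at top-left
plt.show()
```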
2 changes: 1 addition & 1 deletion amadeusgpt/system_prompts/visual_llm.py
@@ -11,7 +11,7 @@ def _get_system_prompt():
```
The "description" has high level description of the image.
The "individuals" indicates the number of animals in the image
The "species" indicates the species of the animals in the image. You can only choose from one of "topview_mouse", "sideview_quadruped" or "others".
The "species" indicates the species of the animals in the image. You can only choose from one of "topview_mouse", "sideview_quadruped" or "others". Note all quadruped animals should be considered as sideview_quadruped.
The "background_objects" is a list of background objects in the image.
Explain your answers before you fill them in. Make sure you only return one JSON string.
"""
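For reference, a well-formed answer under this prompt is a single JSON string shaped roughly as below (the keys come from the prompt; the values are illustrative only):

```python
# Illustrative only: keys from the prompt above, values are made up.
expected_answer = {
    "description": "a brown horse walking through shadow, viewed from the side",
    "individuals": 1,
    "species": "sideview_quadruped",  # one of "topview_mouse", "sideview_quadruped", "others"
    "background_objects": ["fence", "ground"],
}
```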
15 changes: 11 additions & 4 deletions amadeusgpt/utils.py
@@ -212,7 +212,7 @@ def create_qa_message(query: str, video_file_paths: list[str]) -> QA_Message:
    return QA_Message(query, video_file_paths)


from IPython.display import Markdown, Video, display
from IPython.display import Markdown, Video, display, HTML


def parse_result(amadeus, qa_message, use_ipython=True, skip_code_execution=False):
@@ -231,13 +231,20 @@ def parse_result(amadeus, qa_message, use_ipython=True, skip_code_execution=False):
    )
    if use_ipython:
        if len(qa_message.out_videos) > 0:
            for video_path, event_videos in qa_message.out_videos.items():
            for identifier, event_videos in qa_message.out_videos.items():
                for event_video in event_videos:
                    display(Video(event_video, embed=True))

    if use_ipython:
        from matplotlib.animation import FuncAnimation
        if len(qa_message.function_rets) > 0:
            for video_file_path in qa_message.function_rets:
                display(Markdown(str(qa_message.function_rets[video_file_path])))
            for identifier, rets in qa_message.function_rets.items():
                if not isinstance(rets, (tuple, list)):
                    rets = [rets]
                for ret in rets:
                    if isinstance(ret, FuncAnimation):
                        display(HTML(ret.to_jshtml()))
                    else:
                        display(Markdown(str(qa_message.function_rets[identifier])))

    return qa_message
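With this change, generated code that returns a matplotlib `FuncAnimation` is rendered inline via `to_jshtml()` instead of being stringified into Markdown. A notebook-side sketch mirroring the Horse demo (the query text is illustrative):

```python
# Sketch: an animation-producing query now renders inline in the notebook.
qa_message = amadeus.step("Make an animation of the horse keypoints over time.")
qa_message = parse_result(amadeus, qa_message)  # FuncAnimation -> HTML(to_jshtml())
```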
6 changes: 3 additions & 3 deletions notebooks/EPM_demo.ipynb
@@ -84,7 +84,7 @@
"metadata": {},
"outputs": [],
"source": [
"behavior_analysis = amadeus.get_behavior_analysis('/Users/shaokaiye/AmadeusGPT-dev/examples/EPM/EPM_11.mp4')\n",
"behavior_analysis = amadeus.get_behavior_analysis('../examples/EPM/EPM_11.mp4')\n",
"behavior_analysis.gui_manager.add_roi_from_video_selection()"
]
},
@@ -174,9 +174,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "amadeusgpt-minimal",
"display_name": "amadeusgpt-cpu",
"language": "python",
"name": "python3"
"name": "amadeusgpt-cpu"
},
"language_info": {
"codemirror_mode": {
12 changes: 10 additions & 2 deletions notebooks/Horse_demo.ipynb
@@ -42,6 +42,10 @@
"\n",
"kwargs = { \n",
" \"video_info.scene_frame_number\" : scene_frame_number,\n",
" \"llm_info\": {\n",
" \"gpt_model\": \"gpt-4o\",\n",
" }\n",
"\n",
"}\n",
"\n",
"config = create_project(data_folder = \"../examples/Horse\",\n",
@@ -61,7 +65,7 @@
"metadata": {},
"outputs": [],
"source": [
"behavior_analysis = amadeus.get_behavior_analysis('/Users/shaokaiye/AmadeusGPT-dev/examples/Horse/BrownHorseinShadow.mp4')\n",
"behavior_analysis = amadeus.get_behavior_analysis(video_file_path = '../examples/Horse/BrownHorseinShadow.mp4')\n",
"scene_image = behavior_analysis.visual_manager.get_scene_image()\n",
"plt.imshow(scene_image)"
]
@@ -84,7 +88,11 @@
"id": "e394c4e0",
"metadata": {},
"outputs": [],
"source": []
"source": [
"query = \"\"\" make an animation of the horse keypoints over time. Overlap the image frame on it. Save the animation on the disk. \"\"\"\n",
"qa_message = amadeus.step(query)\n",
"qa_message = parse_result(amadeus, qa_message)"
]
}
],
"metadata": {
3 changes: 2 additions & 1 deletion notebooks/MABe_demo.ipynb
@@ -64,7 +64,8 @@
"metadata": {},
"outputs": [],
"source": [
"behavior_analysis = amadeus.get_behavior_analysis('/Users/shaokaiye/AmadeusGPT-dev/examples/MABe/EGS8X2MN4SSUGFWAV976.mp4')\n",
"behavior_analysis = amadeus.get_behavior_analysis(video_file_path='../examples/MABe/EGS8X2MN4SSUGFWAV976.mp4',\n",
" keypoint_file_path='../examples/MABe/EGS8X2MN4SSUGFWAV976.h5')\n",
"scene_image = behavior_analysis.visual_manager.get_scene_image()\n",
"plt.imshow(scene_image)"
]
4 changes: 3 additions & 1 deletion notebooks/MausHaus_demo.ipynb
@@ -65,7 +65,9 @@
"metadata": {},
"outputs": [],
"source": [
"behavior_analysis = amadeus.get_behavior_analysis('/Users/shaokaiye/AmadeusGPT-dev/examples/MausHaus/maushaus_trimmed.mp4')\n",
"behavior_analysis = amadeus.get_behavior_analysis(video_file_path='../examples/MausHaus/maushaus_trimmed.mp4',\n",
" keypoint_file_path='../examples/MausHaus/maushaus_trimmed.h5')\n",
"\n",
"behavior_analysis.gui_manager.add_roi_from_video_selection()"
]
},
50 changes: 37 additions & 13 deletions notebooks/custom_mouse_video.ipynb → notebooks/YourData.ipynb
@@ -21,12 +21,8 @@
"outputs": [],
"source": [
"from amadeusgpt import AMADEUS\n",
"from amadeusgpt.config import Config\n",
"from amadeusgpt.utils import parse_result\n",
"import amadeusgpt\n",
"from amadeusgpt import create_project\n",
"import matplotlib.pyplot as plt\n",
"import cv2"
"from amadeusgpt import create_project"
]
},
{
@@ -35,7 +31,7 @@
"metadata": {},
"source": [
"### Note that unlike other notebooks, we don't have keypoint_file_path here (as it's not provided)\n",
"### By default, we use gpt-4o to determine which SuperAnimal models to run and it will run SuperAnimal in the first time the keypoints related queries are asked\n",
"### By default, we use gpt-4o to determine which SuperAnimal models to run and it will run SuperAnimal in the first time the keypoints related queries are asked. Note to use superanimal, you will need to install the newest DeepLabCut.\n",
"### Make sure you use a short video clips if you are not using GPUs in Linux (Mac silicon support to be added)"
]
},
@@ -46,16 +42,44 @@
"metadata": {},
"outputs": [],
"source": [
"scene_frame_number = 400\n",
"\n",
"# where you store you video and (optionally) keypoint files\n",
"data_folder = \"temp_data_folder\"\n",
"# If you don't have keypoint files, we would try to run SuperAnimal on your video\n",
"# If you have pair of video and keypoint files, make sure they follow the naming convention as following:\n",
"\n",
"# your_folder\n",
"# - cat.mp4\n",
"# - cat.h5 (DLC output)\n",
"# - dog.mp4\n",
"# - dog.h5 (DLC output)\n",
"\n",
"data_folder = \"../examples/Horse\"\n",
"result_folder = \"temp_result_folder\"\n",
"video_suffix = \".mp4\"\n",
"\n",
"config = create_project(data_folder, result_folder, video_suffix = video_suffix)\n",
"# if you want to overwrite the default config, you can do it here\n",
"kwargs = {\n",
" \"data_info\": {\n",
" \"data_folder\": data_folder,\n",
" \"result_folder\": result_folder,\n",
" # can only locate videos specified in video_suffix\n",
" \"video_suffix\": \".mp4\",\n",
" },\n",
" \n",
" \"llm_info\": {\"max_tokens\": 4096, \n",
" \"temperature\": 0.0, \n",
" # one can swtich this to gpt-4o-mini for cheaper inference with the cost of worse performance.\n",
" \"gpt_model\": \"gpt-4o\",\n",
" # We only keep conversation history of 2. You can make it longer with more cost. We are switching to a different form of long-term memory.\n",
" \"keep_last_n_messages\": 2},\n",
" \"keypoint_info\": {\n",
" # only set True if you work with 3D keypoint \n",
" \"use_3d\": False,\n",
" },\n",
" # this is the frame index for gpt-4o to match the right superanimal model.\n",
" \"video_info\": {\"scene_frame_number\": 1},\n",
" }\n",
"\n",
"config[\"scene_frame_number\"] = scene_frame_number\n",
"config = create_project(data_folder, result_folder, video_suffix = video_suffix, **kwargs)\n",
"\n",
"amadeus = AMADEUS(config, use_vlm = True)\n",
"video_file_paths = amadeus.get_video_file_paths()\n",
@@ -89,9 +113,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "amadeusgpt-cpu",
"display_name": "amadeusgpt-minimal",
"language": "python",
"name": "amadeusgpt-cpu"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
2 changes: 1 addition & 1 deletion tests/test_3d.py
@@ -30,4 +30,4 @@ def test_3d_maushaus():

    qa_message = amadeus.step(query)

    parse_result(amadeus, qa_message, use_ipython=False)
    parse_result(amadeus, qa_message, use_ipython=False)