From 1e05356e1ec741e497195742cdabce7a91003044 Mon Sep 17 00:00:00 2001 From: Jaap de Ruyter Date: Thu, 13 Nov 2025 16:37:08 +0100 Subject: [PATCH 1/4] fix broken links in documentation --- docs/Governance.md | 2 +- docs/UseOverviewGuide.md | 2 +- docs/beginner-guides/beginners-guide.md | 2 +- docs/gui/napari_GUI.md | 2 +- docs/installation.md | 2 -- docs/maDLC_UserGuide.md | 4 ++-- docs/recipes/ClusteringNapari.md | 2 +- docs/recipes/MegaDetectorDLCLive.md | 2 +- docs/recipes/TechHardware.md | 2 +- 9 files changed, 9 insertions(+), 11 deletions(-) diff --git a/docs/Governance.md b/docs/Governance.md index ff7477f28e..187379cc1c 100644 --- a/docs/Governance.md +++ b/docs/Governance.md @@ -74,7 +74,7 @@ developer community (including the SC members) fails to reach such a consensus in a reasonable timeframe, the SC is the entity that resolves the issue. Members of the steering council also have the "owner" role within the [DeepLabCut GitHub organization](https://github.com/DeepLabCut/) -and are ultimately responsible for managing the DeepLabCut GitHub account, the [@DeepLabCut](https://twitter.com/DeepLabCut) +and are ultimately responsible for managing the DeepLabCut GitHub account, the [@DeepLabCut](https://x.com/DeepLabCut) twitter account, the [DeepLabCut website](http://www.DeepLabCut.org), and other similar DeepLabCut owned resources. The current steering council of DeepLabCut consists of the original developers: diff --git a/docs/UseOverviewGuide.md b/docs/UseOverviewGuide.md index b71f25524f..017b30dcc4 100644 --- a/docs/UseOverviewGuide.md +++ b/docs/UseOverviewGuide.md @@ -39,7 +39,7 @@ We are primarily a package that enables deep learning-based pose estimation. 
We - [HOW-TO-GUIDES:](overview) step-by-step user guidelines for using DeepLabCut on your own datasets (see below) - [EXPLANATIONS:](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials) resources on understanding how DeepLabCut works - [REFERENCES:](https://github.com/DeepLabCut/DeepLabCut#references) read the science behind DeepLabCut - - [BEGINNER GUIDE TO THE GUI](https://deeplabcut.github.io/DeepLabCut/docs/beginners-guide.html) + - [BEGINNER GUIDE TO THE GUI](https://deeplabcut.github.io/DeepLabCut/docs/beginner-guides/beginners-guide.html) Getting Started: [a video tutorial on navigating the documentation!](https://www.youtube.com/watch?v=A9qZidI7tL8) diff --git a/docs/beginner-guides/beginners-guide.md b/docs/beginner-guides/beginners-guide.md index c19d56410e..4af17f7ce6 100644 --- a/docs/beginner-guides/beginners-guide.md +++ b/docs/beginner-guides/beginners-guide.md @@ -134,4 +134,4 @@ When you first launch the GUI, you'll find three primary main options: ![DeepLabCut Create Project GIF](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1717779616437-30U5RFYV0OY6ACGDG7F4/create-project.gif?format=500w) -## Next, head over the beginner guide for [Setting up what keypoints to track](https://deeplabcut.github.io/DeepLabCut/docs/manage-project) +## Next, head over to the beginner guide for [Setting up what keypoints to track](https://deeplabcut.github.io/DeepLabCut/docs/beginner-guides/manage-project.html) diff --git a/docs/gui/napari_GUI.md b/docs/gui/napari_GUI.md index 4875b23237..7635b03745 100644 --- a/docs/gui/napari_GUI.md +++ b/docs/gui/napari_GUI.md @@ -1,7 +1,7 @@ (napari-gui)= # napari labeling GUI -We replaced wxPython with PySide6 + as of version 2.3. Here is how to use the napari-aspects of the new GUI. It is available in napari-hub as a stand alone GUI as well as integrated into our main GUI, [please see docs here](https://deeplabcut.github.io/DeepLabCut/docs/PROJECT_GUI.html). 
+We replaced wxPython with PySide6 as of version 2.3. Here is how to use the napari-aspects of the new GUI. It is available in napari-hub as a stand alone GUI as well as integrated into our main GUI, [please see docs here](https://deeplabcut.github.io/DeepLabCut/docs/gui/PROJECT_GUI.html). [![License: BSD-3](https://img.shields.io/badge/License-BSD3-blue.svg)](https://www.gnu.org/licenses/bsd3) [![PyPI](https://img.shields.io/pypi/v/napari-deeplabcut.svg?color=green)](https://pypi.org/project/napari-deeplabcut) diff --git a/docs/installation.md b/docs/installation.md index 29c3e27848..13f0574a2a 100644 --- a/docs/installation.md +++ b/docs/installation.md @@ -300,8 +300,6 @@ Here are some additional resources users have found helpful (posted without endo - https://developer.nvidia.com/cuda-toolkit-archive -- http://www.python36.com/install-tensorflow-gpu-windows/ - FFMPEG: diff --git a/docs/maDLC_UserGuide.md b/docs/maDLC_UserGuide.md index 5532402570..9861a41242 100644 --- a/docs/maDLC_UserGuide.md +++ b/docs/maDLC_UserGuide.md @@ -991,10 +991,10 @@ Now, you can run any of the functions described in this documentation. [![Gitter](https://badges.gitter.im/DeepLabCut/community.svg)](https://gitter.im/DeepLabCut/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) - If you want to share some results, or see others: -[![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://twitter.com/DeepLabCut) +[![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://x.com/DeepLabCut) - If you have a code bug report, please create an issue and show the minimal code to reproduce the error: https://github.com/DeepLabCut/DeepLabCut/issues -- if you are looking for resources to increase your understanding of the software and general guidelines, we have an open source, free course: http://DLCcourse.deeplabcut.org. 
+- If you are looking for resources to increase your understanding of the software and general guidelines, we have an open source, free course: https://deeplabcut.github.io/DeepLabCut/docs/course.html. **Please note:** what we cannot do is provided support or help designing your experiments and data analysis. The number of requests for this is too great to sustain in our inbox. We are happy to answer such questions in the forum as a community, in a scalable way. We hope and believe we have given enough tools and resources to get started and to accelerate your research program, and this is backed by the >700 citations using DLC, 2 clinical trials by others, and countless applications. Thus, we believe this code works, is accessible, and with limited programming knowledge can be used. Please read our [Missions & Values statement](mission-and-values) to learn more about what we DO hope to provide you. diff --git a/docs/recipes/ClusteringNapari.md b/docs/recipes/ClusteringNapari.md index 00a3b9f514..a12fa75142 100644 --- a/docs/recipes/ClusteringNapari.md +++ b/docs/recipes/ClusteringNapari.md @@ -36,7 +36,7 @@ Your contributions and suggestions are welcomed, so test the [PR](https://github.com/DeepLabCut/napari-deeplabcut/pull/38) and give us feedback! This #cookbook recipe aims to show a usecase of **clustering in napari** and is contributed by 2022 DLC AI Resident -[Sabrina Benas](https://twitter.com/Sabrineiitor) πŸ’œ. +[Sabrina Benas](https://x.com/Sabrineiitor) πŸ’œ. 
## Detect Outliers to Refine Labels diff --git a/docs/recipes/MegaDetectorDLCLive.md b/docs/recipes/MegaDetectorDLCLive.md index 8af6f59d1b..ecdf3432c7 100644 --- a/docs/recipes/MegaDetectorDLCLive.md +++ b/docs/recipes/MegaDetectorDLCLive.md @@ -102,7 +102,7 @@ We encourage you to try out and experiment on your camera trap or other animal i DLC -Or these lil' cuties πŸΆπŸΆπŸ™€πŸΆ outside a restaurant, from the [Twitter meme](https://twitter.com/standardpuppies/status/1563188163962515457?s=21&t=f2kM2HoUygyLmmAH7Ho-HQ). +Or these lil' cuties πŸΆπŸΆπŸ™€πŸΆ outside a restaurant. DLC diff --git a/docs/recipes/TechHardware.md b/docs/recipes/TechHardware.md index 879fc67dfb..094e426f24 100644 --- a/docs/recipes/TechHardware.md +++ b/docs/recipes/TechHardware.md @@ -11,7 +11,7 @@ For reference, we use e.g. Dell workstations (79xx series) with **Ubuntu 16.04 L ### Computer Hardware: -Ideally, you will use a strong GPU with *at least* 8GB memory such as the [NVIDIA GeForce 1080 Ti, 2080 Ti, or 3090](https://www.nvidia.com/en-us/shop/geforce/?page=1&limit=9&locale=en-us). A GPU is not strictly necessary, but on a CPU the (training and evaluation) code is considerably slower (10x) for ResNets, but MobileNets and EfficientNets are slightly faster. Still, a GPU will give you a massive speed boost. You might also consider using cloud computing services like [Google cloud/amazon web services](https://github.com/DeepLabCut/DeepLabCut/issues/47) or Google Colaboratory. +Ideally, you will use a strong GPU with *at least* 8GB memory such as the [NVIDIA GeForce 1080 Ti, 2080 Ti, or 3090](hhttps://marketplace.nvidia.com/en-us/consumer/graphics-cards/). A GPU is not strictly necessary, but on a CPU the (training and evaluation) code is considerably slower (10x) for ResNets, but MobileNets and EfficientNets are slightly faster. Still, a GPU will give you a massive speed boost. 
You might also consider using cloud computing services like [Google cloud/amazon web services](https://github.com/DeepLabCut/DeepLabCut/issues/47) or Google Colaboratory. ### Camera Hardware: From 2cf4b70c3c2f597b240a8db19d754e4abebb126f Mon Sep 17 00:00:00 2001 From: Jaap de Ruyter Date: Thu, 13 Nov 2025 16:41:40 +0100 Subject: [PATCH 2/4] fix broken links in example notebooks --- examples/COLAB/COLAB_3miceDemo.ipynb | 2 +- ...ATA_maDLC_TrainNetwork_VideoAnalysis.ipynb | 2 +- examples/COLAB/COLAB_transformer_reID.ipynb | 132 +++++++++--------- 3 files changed, 68 insertions(+), 68 deletions(-) diff --git a/examples/COLAB/COLAB_3miceDemo.ipynb b/examples/COLAB/COLAB_3miceDemo.ipynb index 52ad292bbc..052cd6ed78 100644 --- a/examples/COLAB/COLAB_3miceDemo.ipynb +++ b/examples/COLAB/COLAB_3miceDemo.ipynb @@ -33,7 +33,7 @@ "- To create a full maDLC pipeline please see our full docs: https://deeplabcut.github.io/DeepLabCut/README.html\n", "\n", "- Of interest is a full how-to for maDLC: https://deeplabcut.github.io/DeepLabCut/docs/maDLC_UserGuide.html\n", - "- a quick guide to maDLC: https://deeplabcut.github.io/DeepLabCut/docs/tutorial.html\n", + "- a quick guide to maDLC: https://deeplabcut.github.io/DeepLabCut/docs/quick-start/tutorial_maDLC.html\n", "- a demo COLAB for how to use maDLC on your own data: https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_maDLC_TrainNetwork_VideoAnalysis.ipynb\n", "\n", "### To get started, please go to \"Runtime\" ->\"change runtime type\"->select \"Python3\", and then select \"GPU\"" diff --git a/examples/COLAB/COLAB_YOURDATA_maDLC_TrainNetwork_VideoAnalysis.ipynb b/examples/COLAB/COLAB_YOURDATA_maDLC_TrainNetwork_VideoAnalysis.ipynb index 1f64136a18..b8c060960d 100644 --- a/examples/COLAB/COLAB_YOURDATA_maDLC_TrainNetwork_VideoAnalysis.ipynb +++ b/examples/COLAB/COLAB_YOURDATA_maDLC_TrainNetwork_VideoAnalysis.ipynb @@ -7,7 +7,7 @@ "id": "view-in-github" }, "source": [ - "\"Open" + "\"Open" ] }, { diff --git 
a/examples/COLAB/COLAB_transformer_reID.ipynb b/examples/COLAB/COLAB_transformer_reID.ipynb index 03f7df8a32..008255692f 100644 --- a/examples/COLAB/COLAB_transformer_reID.ipynb +++ b/examples/COLAB/COLAB_transformer_reID.ipynb @@ -3,8 +3,8 @@ { "cell_type": "markdown", "metadata": { - "id": "view-in-github", - "colab_type": "text" + "colab_type": "text", + "id": "view-in-github" }, "source": [ "\"Open" @@ -29,20 +29,20 @@ "\n", "### To create a full maDLC pipeline please see our full docs: https://deeplabcut.github.io/DeepLabCut/README.html\n", "- Of interest is a full how-to for maDLC: https://deeplabcut.github.io/DeepLabCut/docs/maDLC_UserGuide.html\n", - "- a quick guide to maDLC: https://deeplabcut.github.io/DeepLabCut/docs/tutorial.html\n", - "- a demo COLAB for how to use maDLC on your own data: https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_maDLC_TrainNetwork_VideoAnalysis.ipynb\n", + "- a quick guide to maDLC: https://deeplabcut.github.io/DeepLabCut/docs/quick-start/tutorial_maDLC.html\n", + "- a demo COLAB for how to use maDLC on your own data: https://github.com/DeepLabCut/DeepLabCut/blob/main/examples/COLAB/COLAB_YOURDATA_maDLC_TrainNetwork_VideoAnalysis.ipynb\n", "\n", "### To get started, please go to \"Runtime\" ->\"change runtime type\"->select \"Python3\", and then select \"GPU\"\n" ] }, { "cell_type": "markdown", - "source": [ - "‼️ **Attention: this demo is for maDLC, which is version 2.2**\n" - ], "metadata": { "id": "xOe2hvy85EVP" - } + }, + "source": [ + "‼️ **Attention: this demo is for maDLC, which is version 2.2**\n" + ] }, { "cell_type": "code", @@ -58,15 +58,15 @@ }, { "cell_type": "code", - "source": [ - "import deeplabcut\n", - "import os" - ], + "execution_count": 3, "metadata": { "id": "TlhrVFKN8euh" }, - "execution_count": 3, - "outputs": [] + "outputs": [], + "source": [ + "import deeplabcut\n", + "import os" + ] }, { "cell_type": "markdown", @@ -87,16 +87,16 @@ "cell_type": "code", "execution_count": 5, 
"metadata": { - "id": "PusLdqbqJi60", "colab": { "base_uri": "https://localhost:8080/" }, + "id": "PusLdqbqJi60", "outputId": "dbe30821-d3a7-443f-de74-6cb0bee49aac" }, "outputs": [ { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Downloading demo-me-2021-07-14.zip...\n" ] @@ -147,11 +147,7 @@ }, { "cell_type": "code", - "source": [ - "deeplabcut.analyze_videos(config_path,[video],\n", - " shuffle=0, videotype=\"mp4\",\n", - " auto_track=True)" - ], + "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/", @@ -160,26 +156,25 @@ "id": "U_351Hkv81X-", "outputId": "f7c30461-101f-47b6-c04f-15809aa5a4bb" }, - "execution_count": 7, "outputs": [ { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Using snapshot-20000 for model /content/demo-me-2021-07-14/dlc-models/iteration-0/demoJul14-trainset95shuffle0\n" ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "/usr/local/lib/python3.11/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:1694: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.\n", " warnings.warn('`layer.apply` is deprecated and '\n" ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Activating extracting of PAFs\n", "Starting to analyze % /content/demo-me-2021-07-14/videos/videocompressed1.mp4\n", @@ -190,30 +185,30 @@ ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [00:39<00:00, 58.83it/s]\n" ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Video Analyzed. 
Saving results in /content/demo-me-2021-07-14/videos...\n" ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "/usr/local/lib/python3.11/dist-packages/deeplabcut/utils/auxfun_multianimal.py:83: UserWarning: default_track_method` is undefined in the config.yaml file and will be set to `ellipse`.\n", " warnings.warn(\n" ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Using snapshot-20000 for model /content/demo-me-2021-07-14/dlc-models/iteration-0/demoJul14-trainset95shuffle0\n", "Processing... /content/demo-me-2021-07-14/videos/videocompressed1.mp4\n", @@ -221,24 +216,24 @@ ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [00:02<00:00, 1088.72it/s]\n", "2330it [00:06, 342.29it/s] \n" ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "The tracklets were created (i.e., under the hood deeplabcut.convert_detections2tracklets was run). Now you can 'refine_tracklets' in the GUI, or run 'deeplabcut.stitch_tracklets'.\n", "Processing... /content/demo-me-2021-07-14/videos/videocompressed1.mp4\n" ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:00<00:00, 1488.53it/s]\n", "/usr/local/lib/python3.11/dist-packages/deeplabcut/refine_training_dataset/stitch.py:934: FutureWarning: Starting with pandas version 3.0 all arguments of to_hdf except for the argument 'path_or_buf' will be keyword-only.\n", @@ -246,8 +241,8 @@ ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "The videos are analyzed. Time to assemble animals and track 'em... 
\n", " Call 'create_video_with_all_detections' to check multi-animal detection quality before tracking.\n", @@ -255,18 +250,23 @@ ] }, { - "output_type": "execute_result", "data": { - "text/plain": [ - "'DLC_dlcrnetms5_demoJul14shuffle0_20000'" - ], "application/vnd.google.colaboratory.intrinsic+json": { "type": "string" - } + }, + "text/plain": [ + "'DLC_dlcrnetms5_demoJul14shuffle0_20000'" + ] }, + "execution_count": 7, "metadata": {}, - "execution_count": 7 + "output_type": "execute_result" } + ], + "source": [ + "deeplabcut.analyze_videos(config_path,[video],\n", + " shuffle=0, videotype=\"mp4\",\n", + " auto_track=True)" ] }, { @@ -291,32 +291,32 @@ "cell_type": "code", "execution_count": 8, "metadata": { - "id": "aTRbuUQ1FBO0", "colab": { "base_uri": "https://localhost:8080/" }, + "id": "aTRbuUQ1FBO0", "outputId": "0d182f64-512d-463d-a997-226c7199b724" }, "outputs": [ { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Filtering with median model /content/demo-me-2021-07-14/videos/videocompressed1.mp4\n", "Saving filtered csv poses!\n" ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "/usr/local/lib/python3.11/dist-packages/deeplabcut/post_processing/filtering.py:298: FutureWarning: Starting with pandas version 3.0 all arguments of to_hdf except for the argument 'path_or_buf' will be keyword-only.\n", " data.to_hdf(outdataname, \"df_with_missing\", format=\"table\", mode=\"w\")\n" ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Starting to process video: /content/demo-me-2021-07-14/videos/videocompressed1.mp4\n", "Loading /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.\n", @@ -326,8 +326,8 @@ ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "/usr/local/lib/python3.11/dist-packages/deeplabcut/utils/make_labeled_video.py:140: FutureWarning: DataFrame.groupby with axis=1 is deprecated. 
Do `frame.T.groupby(...)` without axis instead.\n", " Dataframe.groupby(level=\"individuals\", axis=1).size().values // 3\n", @@ -335,14 +335,14 @@ ] }, { - "output_type": "execute_result", "data": { "text/plain": [ "[True]" ] }, + "execution_count": 8, "metadata": {}, - "execution_count": 8 + "output_type": "execute_result" } ], "source": [ @@ -385,16 +385,16 @@ "cell_type": "code", "execution_count": 9, "metadata": { - "id": "7w9BDIA7BB_i", "colab": { "base_uri": "https://localhost:8080/" }, + "id": "7w9BDIA7BB_i", "outputId": "a163087d-cbcb-4e4d-f461-2e24ed19a80b" }, "outputs": [ { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Loading /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.\n", "Plots created! Please check the directory \"plot-poses\" within the video directory\n" @@ -420,23 +420,23 @@ "cell_type": "code", "execution_count": 10, "metadata": { - "id": "5xlO6TVYxQWc", "colab": { "base_uri": "https://localhost:8080/" }, + "id": "5xlO6TVYxQWc", "outputId": "a433221f-0390-4028-fe68-be0b90adad48" }, "outputs": [ { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Using snapshot-20000 for model /content/demo-me-2021-07-14/dlc-models/iteration-0/demoJul14-trainset95shuffle0\n" ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "/usr/local/lib/python3.11/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:1694: UserWarning: `layer.apply` is deprecated and will be removed in a future version. 
Please use `layer.__call__` method instead.\n", " warnings.warn('`layer.apply` is deprecated and '\n", @@ -449,8 +449,8 @@ ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Activating extracting of PAFs\n", "Starting to analyze % /content/demo-me-2021-07-14/videos/videocompressed1.mp4\n", @@ -461,15 +461,15 @@ ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2330/2330 [01:18<00:00, 29.78it/s]\n" ] }, { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract a few representative outlier frames.\n", "Epoch 10, train acc: 0.61\n", @@ -497,8 +497,8 @@ ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:00<00:00, 483.21it/s]\n", "/usr/local/lib/python3.11/dist-packages/deeplabcut/refine_training_dataset/stitch.py:934: FutureWarning: Starting with pandas version 3.0 all arguments of to_hdf except for the argument 'path_or_buf' will be keyword-only.\n", @@ -530,16 +530,16 @@ "cell_type": "code", "execution_count": 11, "metadata": { - "id": "MBMbRFEMxmi4", "colab": { "base_uri": "https://localhost:8080/" }, + "id": "MBMbRFEMxmi4", "outputId": "5ca4357a-c8e1-46c6-ecad-141bfce48cc5" }, "outputs": [ { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Loading /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.\n", "Plots created! 
Please check the directory \"plot-poses\" within the video directory\n" @@ -560,16 +560,16 @@ "cell_type": "code", "execution_count": 12, "metadata": { - "id": "vx3e-r1CoXaX", "colab": { "base_uri": "https://localhost:8080/" }, + "id": "vx3e-r1CoXaX", "outputId": "46cdbd39-d1f6-4b78-abba-7e979740f2a2" }, "outputs": [ { - "output_type": "stream", "name": "stdout", + "output_type": "stream", "text": [ "Starting to process video: /content/demo-me-2021-07-14/videos/videocompressed1.mp4\n", "Loading /content/demo-me-2021-07-14/videos/videocompressed1.mp4 and data.\n", @@ -579,8 +579,8 @@ ] }, { - "output_type": "stream", "name": "stderr", + "output_type": "stream", "text": [ "/usr/local/lib/python3.11/dist-packages/deeplabcut/utils/make_labeled_video.py:140: FutureWarning: DataFrame.groupby with axis=1 is deprecated. Do `frame.T.groupby(...)` without axis instead.\n", " Dataframe.groupby(level=\"individuals\", axis=1).size().values // 3\n", @@ -588,14 +588,14 @@ ] }, { - "output_type": "execute_result", "data": { "text/plain": [ "[True]" ] }, + "execution_count": 12, "metadata": {}, - "execution_count": 12 + "output_type": "execute_result" } ], "source": [ @@ -615,11 +615,11 @@ "metadata": { "accelerator": "GPU", "colab": { - "name": "COLAB_transformer_reID.ipynb", - "provenance": [], - "machine_shape": "hm", "gpuType": "A100", - "include_colab_link": true + "include_colab_link": true, + "machine_shape": "hm", + "name": "COLAB_transformer_reID.ipynb", + "provenance": [] }, "kernelspec": { "display_name": "Python 3", From 96cc0f75a779d4bc8bc732bf06b5b2bae6d3ec20 Mon Sep 17 00:00:00 2001 From: Jaap de Ruyter Date: Thu, 13 Nov 2025 17:00:15 +0100 Subject: [PATCH 3/4] fix typo in URL --- docs/recipes/TechHardware.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/recipes/TechHardware.md b/docs/recipes/TechHardware.md index 094e426f24..6fb1add9bc 100644 --- a/docs/recipes/TechHardware.md +++ b/docs/recipes/TechHardware.md @@ -11,7 +11,7 @@ For 
reference, we use e.g. Dell workstations (79xx series) with **Ubuntu 16.04 L ### Computer Hardware: -Ideally, you will use a strong GPU with *at least* 8GB memory such as the [NVIDIA GeForce 1080 Ti, 2080 Ti, or 3090](hhttps://marketplace.nvidia.com/en-us/consumer/graphics-cards/). A GPU is not strictly necessary, but on a CPU the (training and evaluation) code is considerably slower (10x) for ResNets, but MobileNets and EfficientNets are slightly faster. Still, a GPU will give you a massive speed boost. You might also consider using cloud computing services like [Google cloud/amazon web services](https://github.com/DeepLabCut/DeepLabCut/issues/47) or Google Colaboratory. +Ideally, you will use a strong GPU with *at least* 8GB memory such as the [NVIDIA GeForce 1080 Ti, 2080 Ti, or 3090](https://marketplace.nvidia.com/en-us/consumer/graphics-cards/). A GPU is not strictly necessary, but on a CPU the (training and evaluation) code is considerably slower (10x) for ResNets, but MobileNets and EfficientNets are slightly faster. Still, a GPU will give you a massive speed boost. You might also consider using cloud computing services like [Google cloud/amazon web services](https://github.com/DeepLabCut/DeepLabCut/issues/47) or Google Colaboratory. 
### Camera Hardware: From 46d6d261a90fade2b11748ca7a0f1ce198223af6 Mon Sep 17 00:00:00 2001 From: Jaap de Ruyter Date: Fri, 14 Nov 2025 09:21:09 +0100 Subject: [PATCH 4/4] fix broken links in README --- README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index a097f35fed..1f50ff77ee 100644 --- a/README.md +++ b/README.md @@ -47,7 +47,7 @@ [![Percentage of issues still open](http://isitmaintained.com/badge/open/deeplabcut/deeplabcut.svg)](http://isitmaintained.com/project/deeplabcut/deeplabcut "Percentage of issues still open") [![Image.sc forum](https://img.shields.io/badge/dynamic/json.svg?label=forum&url=https%3A%2F%2Fforum.image.sc%2Ftag%2Fdeeplabcut.json&query=%24.topic_list.tags.0.topic_count&colorB=brightgreen&&suffix=%20topics&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAABPklEQVR42m3SyyqFURTA8Y2BER0TDyExZ+aSPIKUlPIITFzKeQWXwhBlQrmFgUzMMFLKZeguBu5y+//17dP3nc5vuPdee6299gohUYYaDGOyyACq4JmQVoFujOMR77hNfOAGM+hBOQqB9TjHD36xhAa04RCuuXeKOvwHVWIKL9jCK2bRiV284QgL8MwEjAneeo9VNOEaBhzALGtoRy02cIcWhE34jj5YxgW+E5Z4iTPkMYpPLCNY3hdOYEfNbKYdmNngZ1jyEzw7h7AIb3fRTQ95OAZ6yQpGYHMMtOTgouktYwxuXsHgWLLl+4x++Kx1FJrjLTagA77bTPvYgw1rRqY56e+w7GNYsqX6JfPwi7aR+Y5SA+BXtKIRfkfJAYgj14tpOF6+I46c4/cAM3UhM3JxyKsxiOIhH0IO6SH/A1Kb1WBeUjbkAAAAAElFTkSuQmCC)](https://forum.image.sc/tag/deeplabcut) [![Gitter](https://badges.gitter.im/DeepLabCut/community.svg)](https://gitter.im/DeepLabCut/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) -[![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://twitter.com/DeepLabCut) +[![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://x.com/DeepLabCut) [![Generic badge](https://img.shields.io/badge/Contributions-Welcome-brightgreen.svg)](CONTRIBUTING.md) [![CZI's Essential Open Source Software for 
Science](https://chanzuckerberg.github.io/open-science/badges/CZI-EOSS.svg)](https://czi.co/EOSS) @@ -95,7 +95,7 @@ We recommend using our conda file, see [here](https://github.com/DeepLabCut/Deep Our docs walk you through using DeepLabCut, and key API points. For an overview of the toolbox and workflow for project management, see our step-by-step at [Nature Protocols paper](https://doi.org/10.1038/s41596-019-0176-0). -For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! http://DLCcourse.deeplabcut.org +For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! https://deeplabcut.github.io/DeepLabCut/docs/course.html

@@ -177,7 +177,7 @@ DeepLabCut is an open-source tool and has benefited from suggestions and edits b | [![Image.sc forum](https://img.shields.io/badge/dynamic/json.svg?label=forum&url=https%3A%2F%2Fforum.image.sc%2Ftag%2Fdeeplabcut.json&query=%24.topic_list.tags.0.topic_count&colorB=brightgreen&&suffix=%20topics&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAABPklEQVR42m3SyyqFURTA8Y2BER0TDyExZ+aSPIKUlPIITFzKeQWXwhBlQrmFgUzMMFLKZeguBu5y+//17dP3nc5vuPdee6299gohUYYaDGOyyACq4JmQVoFujOMR77hNfOAGM+hBOQqB9TjHD36xhAa04RCuuXeKOvwHVWIKL9jCK2bRiV284QgL8MwEjAneeo9VNOEaBhzALGtoRy02cIcWhE34jj5YxgW+E5Z4iTPkMYpPLCNY3hdOYEfNbKYdmNngZ1jyEzw7h7AIb3fRTQ95OAZ6yQpGYHMMtOTgouktYwxuXsHgWLLl+4x++Kx1FJrjLTagA77bTPvYgw1rRqY56e+w7GNYsqX6JfPwi7aR+Y5SA+BXtKIRfkfJAYgj14tpOF6+I46c4/cAM3UhM3JxyKsxiOIhH0IO6SH/A1Kb1WBeUjbkAAAAAElFTkSuQmCC)](https://forum.image.sc/tag/deeplabcut)
🐭Tag: DeepLabCut | To ask help and support questions πŸ‘‹ | PromptlyπŸ”₯ | The DLC Community | |[![Gitter](https://badges.gitter.im/DeepLabCut/community.svg)](https://gitter.im/DeepLabCut/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) | To discuss with other users, share ideas and collaborateπŸ’‘ | 2-5 days | The DLC Community | | [BluSkyπŸ¦‹](https://bsky.app/profile/deeplabcut.bsky.social) | To keep up with our latest news and updates πŸ“’ | 2-5 days | DLC Team | -| [![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://twitter.com/DeepLabCut) | To keep up with our latest news and updates πŸ“’ | 2-5 days | DLC Team | +| [![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://x.com/DeepLabCut) | To keep up with our latest news and updates πŸ“’ | 2-5 days | DLC Team | | The DeepLabCut [AI Residency Program](https://www.deeplabcutairesidency.org/) | To come and work with us next summerπŸ‘ | Annually | DLC Team | @@ -202,7 +202,7 @@ VERSION 2.3: Model Zoo SuperAnimals, and a whole new GUI experience. VERSION 2.2: Multi-animal pose estimation, identification, and tracking with DeepLabCut is supported (as well as single-animal projects). VERSION 2.0-2.1: This is the **Python package** of [DeepLabCut](https://www.nature.com/articles/s41593-018-0209-y) that was originally released in Oct 2018 with our [Nature Protocols](https://doi.org/10.1038/s41596-019-0176-0) paper (preprint [here](https://www.biorxiv.org/content/10.1101/476531v1)). -This package includes graphical user interfaces to label your data, and take you from data set creation to automatic behavioral analysis. 
It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects, and data augmentation tools that improve network performance, especially in challenging cases (see [panel b](https://camo.githubusercontent.com/77c92f6b89d44ca758d815bdd7e801247437060b/68747470733a2f2f737461746963312e73717561726573706163652e636f6d2f7374617469632f3537663664353163396637343536366635356563663237312f742f3563336663316336373538643436393530636537656563372f313534373638323338333539352f636865657461682e706e673f666f726d61743d37353077)). +This package includes graphical user interfaces to label your data, and take you from data set creation to automatic behavioral analysis. It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects, and data augmentation tools that improve network performance, especially in challenging cases. VERSION 1.0: The initial, Nature Neuroscience version of [DeepLabCut](https://www.nature.com/articles/s41593-018-0209-y) can be found in the history of git, or here: https://github.com/DeepLabCut/DeepLabCut/releases/tag/1.11 @@ -210,7 +210,7 @@ VERSION 1.0: The initial, Nature Neuroscience version of [DeepLabCut](https://ww :purple_heart: We released a major update, moving from 2.x --> 3.x with the backend change to PyTorch -:purple_heart: The DeepLabCut Model Zoo launches SuperAnimals, see more [here](http://www.mackenziemathislab.org/dlc-modelzoo/). +:purple_heart: The DeepLabCut Model Zoo launches SuperAnimals, see more [here](https://deeplabcut.github.io/DeepLabCut/docs/ModelZoo.html). :purple_heart: **DeepLabCut supports multi-animal pose estimation!** maDLC is out of beta/rc mode and beta is deprecated, thanks to the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps. 
Please see the [new `2.2+` releases](https://github.com/DeepLabCut/DeepLabCut/releases) for what's new & how to install it, please see our new [paper, Lauer et al 2022](https://www.nature.com/articles/s41592-022-01443-0), and the [new docs]( https://deeplabcut.github.io/DeepLabCut) on how to use it! @@ -240,7 +240,7 @@ VERSION 1.0: The initial, Nature Neuroscience version of [DeepLabCut](https://ww - Oct 2019: DLC 2.1 released with lots of updates. In particular, a Project Manager GUI, MobileNetsV2, and augmentation packages (Imgaug and Tensorpack). For detailed updates see [releases](https://github.com/DeepLabCut/DeepLabCut/releases) - Sept 2019: We published two preprints. One showing that [ImageNet pretraining contributes to robustness](https://arxiv.org/abs/1909.11229) and a [review on animal pose estimation](https://arxiv.org/abs/1909.13868). Check them out! - Jun 2019: DLC 2.0.7 released with lots of updates. For updates see [releases](https://github.com/DeepLabCut/DeepLabCut/releases) -- Feb 2019: DeepLabCut joined [twitter](https://twitter.com/deeplabcut) [![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://twitter.com/DeepLabCut) +- Feb 2019: DeepLabCut joined [twitter](https://x.com/deeplabcut) [![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://x.com/DeepLabCut) - Jan 2019: We hosted workshops for DLC in Warsaw, Munich and Cambridge. 
The materials are available [here](https://github.com/DeepLabCut/DeepLabCut-Workshop-Materials) - Jan 2019: We joined the Image Source Forum for user help: [![Image.sc forum](https://img.shields.io/badge/dynamic/json.svg?label=forum&url=https%3A%2F%2Fforum.image.sc%2Ftag%2Fdeeplabcut.json&query=%24.topic_list.tags.0.topic_count&colorB=brightgreen&&suffix=%20topics&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAABPklEQVR42m3SyyqFURTA8Y2BER0TDyExZ+aSPIKUlPIITFzKeQWXwhBlQrmFgUzMMFLKZeguBu5y+//17dP3nc5vuPdee6299gohUYYaDGOyyACq4JmQVoFujOMR77hNfOAGM+hBOQqB9TjHD36xhAa04RCuuXeKOvwHVWIKL9jCK2bRiV284QgL8MwEjAneeo9VNOEaBhzALGtoRy02cIcWhE34jj5YxgW+E5Z4iTPkMYpPLCNY3hdOYEfNbKYdmNngZ1jyEzw7h7AIb3fRTQ95OAZ6yQpGYHMMtOTgouktYwxuXsHgWLLl+4x++Kx1FJrjLTagA77bTPvYgw1rRqY56e+w7GNYsqX6JfPwi7aR+Y5SA+BXtKIRfkfJAYgj14tpOF6+I46c4/cAM3UhM3JxyKsxiOIhH0IO6SH/A1Kb1WBeUjbkAAAAAElFTkSuQmCC)](https://forum.image.sc/tag/deeplabcut)