Merged
44 commits
24a3cc9
addding new folder
fcendra Jun 25, 2020
b9d78e8
rename file from dronekit.py to pixhawk,py
fcendra Jun 26, 2020
f7ecef6
updated version
fcendra Jun 26, 2020
71adcfc
new commit
fcendra Jun 26, 2020
630a21f
new commit
fcendra Jun 26, 2020
e33d01b
Adds names for the different classes
utkarsh867 Jun 26, 2020
550a5a9
Makes an abstraction for the YOLO object detection code
utkarsh867 Jun 26, 2020
0e24329
Adds arguments for files
utkarsh867 Jun 26, 2020
a2d0dde
Refactors code for optimised imports
utkarsh867 Jun 26, 2020
55ff556
Refactors code for optimised imports
utkarsh867 Jun 26, 2020
f49c7a5
Adds deleted weights from the repo
utkarsh867 Jun 26, 2020
5f7b8a3
Updates README
utkarsh867 Jun 26, 2020
5bb04c2
add some features
fcendra Jun 27, 2020
1b472dd
yolo.py functions has been added to main.py
fcendra Jun 27, 2020
dabbb66
Adding some features in report.py
fcendra Jun 29, 2020
76399c8
New features
fcendra Jul 2, 2020
28b5cbb
Adding features for thingspeak
fcendra Jul 3, 2020
f168fd0
Adding features (thinkspeak)
fcendra Jul 7, 2020
4a5a28b
Code refactoring, adds error handling, logging
utkarsh867 Jul 8, 2020
7b7e82a
Adds FPS counter, shows video on flag
utkarsh867 Jul 8, 2020
ba0dfcb
Adding some code in thingspeak.py
fcendra Jul 9, 2020
cf0ed33
Adding asyncio module
fcendra Jul 9, 2020
627447c
Implemented multiprocessing module (have not tested yet on jetson)
fcendra Jul 12, 2020
1b71f6b
Hide channel id and apikey for thinkspeak
fcendra Jul 12, 2020
d8fd1c5
set config.py into .gitignore
fcendra Jul 13, 2020
2d23201
test
fcendra Jul 13, 2020
3cb3496
add conifg.py into .gitignore
fcendra Jul 13, 2020
0b306ce
set up multiprocess moduleinto main.py
fcendra Jul 13, 2020
108d0f1
Adds tiny weights to the project
utkarsh867 Jul 13, 2020
125479b
Major code refactoring
utkarsh867 Jul 13, 2020
31fe84c
Merge pull request #3 from clearbothk/fernando
utkarsh867 Jul 13, 2020
9a3299b
Refactoring, Adds string parsing for payload
utkarsh867 Jul 14, 2020
b2e3d0b
Refactoring Report functionality code
utkarsh867 Jul 14, 2020
2e005f9
Fixes a bug with pixhawk instance
utkarsh867 Jul 14, 2020
400695e
Optimized pixhawk somehow
fcendra Jul 14, 2020
0018587
Merge branch 'test' of https://github.com/clearbothk/botmlcode into test
fcendra Jul 14, 2020
1caedc4
Migrated to environment variables for ThinkSpeak API keys
fcendra Jul 15, 2020
1db16f6
Removes unused import in pixhawk
utkarsh867 Jul 15, 2020
661e0e4
Adds exception handling code for vehicle connection
utkarsh867 Jul 15, 2020
f2bc14d
Adds heading for Pixhawk documentation
utkarsh867 Jul 15, 2020
9ffb887
Updates the tiny-YOLO model after mid-training
utkarsh867 Jul 15, 2020
9101eb2
refactors pixhawk instance as pixhawk
utkarsh867 Jul 15, 2020
3cb5374
updated README for setup on the Jetson Nano board for pixhawk
fcendra Jul 15, 2020
aabdffa
Added Dronekit Pixhawk Connection in README
fcendra Jul 16, 2020
4 changes: 4 additions & 0 deletions .gitignore
@@ -1,2 +1,6 @@
.vscode/
.venv/
.idea/
__pycache__/
.DS_Store
config.py
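The new `config.py` entry keeps credentials out of version control; per the commit history, the ThingSpeak channel ID and API key later moved to environment variables. A minimal sketch of such a loader — the variable names here are assumptions, not ones shown in this diff:

```python
import os


def load_thingspeak_config():
    # Hypothetical variable names; the repository's actual ones are not shown here.
    return {
        "channel_id": os.environ.get("THINGSPEAK_CHANNEL_ID"),
        "api_key": os.environ.get("THINGSPEAK_API_KEY"),
    }
```

Reading keys from the environment means `config.py` never needs to be committed at all.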
47 changes: 38 additions & 9 deletions README.md
@@ -48,15 +48,7 @@ Now, for the last step, make sure that you have `.venv` active using the command

If the above runs without errors, you have installed things correctly.

#### Running the code on the Jetson Nano for detection

To run the detection, use the commands:

```bash
python yolo_object_detection.py -y model
```

This should pull up a screen with the live feed from the camera.
### Misc instructions if you have not compiled OpenCV yet

#### OpenCV compile CMake

@@ -78,3 +70,40 @@ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=/home/`whoami`/opencv_contrib/modules ..
```


#### Setup on the Jetson Nano board for Pixhawk

We use the [DroneKit-Python API](https://dronekit-python.readthedocs.io/en/latest/about/overview.html) as the onboard app linking the Jetson Nano and the Pixhawk.

Make sure your Linux user has permission to access the tty device.

Connection port: `/dev/ttyTHS1`

Assuming the username is `user`:
```bash
sudo usermod -a -G dialout user
```
For a brief introduction to DroneKit, try running `testing.py` (in the `botmlcode/` directory):

```bash
python testing.py
```
Note that it can take around 10 seconds or more before the script's print statements execute. At first we thought this was a bug (tracked in the issue below):
* Note (14th July, 2020): Optimise Pixhawk integration to the Jetson #5 [Track the issue here](https://github.com/clearbothk/botmlcode/issues/5)

In `pixhawk.py`, the code below establishes the DroneKit connection to the device. It is recommended to set [wait_ready=True](https://dronekit-python.readthedocs.io/en/latest/guide/connecting_vehicle.html) so that `connect()` waits until key vehicle parameters and attributes are populated, confirming the vehicle initialised successfully.

```python
def __init__(self, connection_port="/dev/ttyTHS1", baud=57600):
try:
self.vehicle = connect(connection_port, wait_ready=True, baud=baud)
self.vehicle.mode = VehicleMode("MANUAL")
except serialutil.SerialException as e:
logging.error(e)
```
We therefore call `dronekit.connect()` once, in the constructor, so that the connection is reused instead of being re-established every time the [DroneKit attribute functions](https://dronekit-python.readthedocs.io/en/latest/guide/vehicle_state_and_parameters.html) are called.
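The connect-once pattern can be exercised without hardware by injecting a stand-in for `dronekit.connect`; everything below (`PixhawkSketch`, `fake_connect`, `FakeVehicle`) is purely illustrative, not repository code:

```python
class PixhawkSketch:
    """Illustrative only: establish the connection once, then reuse it."""

    def __init__(self, connect_fn, connection_port="/dev/ttyTHS1", baud=57600):
        # connect_fn stands in for dronekit.connect so this sketch runs anywhere
        self.vehicle = connect_fn(connection_port, wait_ready=True, baud=baud)

    def heading(self):
        # Repeated attribute reads reuse self.vehicle; no reconnect happens
        return self.vehicle.heading


calls = []


class FakeVehicle:
    heading = 90


def fake_connect(port, wait_ready=False, baud=57600):
    calls.append(port)  # record each "connection" attempt
    return FakeVehicle()


pixhawk = PixhawkSketch(fake_connect)
readings = [pixhawk.heading() for _ in range(3)]
```

Three attribute reads, one connection: the constructor pays the `wait_ready` cost exactly once.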



Empty file added detector/__init__.py
Empty file.
123 changes: 123 additions & 0 deletions detector/detector.py
@@ -0,0 +1,123 @@
import numpy as np
import cv2
import os
import functools, operator

import logging


class Detector:
    weights_file = None
    config_file = None
    names_file = None

    confidence_threshold = 0
    nms_threshold = 0
    LABELS = []

    net = None
    ln = None

    def __init__(self, model_path="model", use_gpu=False, confidence_thres=0.5, nms_thres=0.3,
                 weights_file="clearbot.weights", config_file="clearbot.cfg", names_file="clearbot.names"):

        self.confidence_threshold = confidence_thres
        self.nms_threshold = nms_thres

        self.weights_file = os.path.sep.join([os.path.dirname(os.path.realpath(__file__)), model_path, weights_file])
        self.config_file = os.path.sep.join([os.path.dirname(os.path.realpath(__file__)), model_path, config_file])
        self.names_file = os.path.sep.join([os.path.dirname(os.path.realpath(__file__)), model_path, names_file])
        logging.debug("Finished initialising model file paths")

        try:
            self.LABELS = open(self.names_file).read().strip().split("\n")
            logging.debug(f"Loaded labels from the names file: \n{self.LABELS}")
        except Exception as e:
            logging.error(e)

        try:
            logging.debug("Loading Darknet model")
            self.net = cv2.dnn.readNetFromDarknet(self.config_file, self.weights_file)
            logging.debug("Finished loading Darknet model")
        except Exception as e:
            logging.error(e)

        if use_gpu:
            logging.info("Will try to use GPU backend")
            self.net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
            self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
        else:
            logging.info("Using CPU only")

        self.ln = self.net.getLayerNames()

        unconnected_layers = functools.reduce(operator.iconcat, self.net.getUnconnectedOutLayers(), [])
        logging.debug(f"Indexes of unconnected layers: {unconnected_layers}")
        self.ln = [self.ln[i - 1] for i in unconnected_layers]
        logging.debug(f"Output layers for YOLO are: {self.ln}")

    def detect(self, frame):
        """
        Detect the objects in the frame
        :param frame: Frame that has been captured from the OpenCV video stream
        :return: iterable of dicts with the labels, confidences and bounding boxes
        """
        (H, W) = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

        self.net.setInput(blob)
        layer_outputs = self.net.forward(self.ln)

        boxes = []
        confidences = []
        class_ids = []

        for output in layer_outputs:
            for detection in output:
                scores = detection[5:]
                class_id = np.argmax(scores)
                confidence = scores[class_id]

                if confidence > self.confidence_threshold:
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")

                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))

                    # NMSBoxes() expects each box as plain Python ints; the
                    # numpy integers produced by box.astype("int") make the
                    # call fail, so cast width and height explicitly.
                    boxes.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    class_ids.append(class_id)

        logging.debug((boxes, confidences, self.confidence_threshold, self.nms_threshold))
        indexes = cv2.dnn.NMSBoxes(boxes, confidences, self.confidence_threshold, self.nms_threshold)
        logging.debug(f"Indexes: {indexes}")
        if len(indexes) > 0:
            indexes = indexes.flatten()
            return map(lambda idx: self.detected_to_result(boxes[idx], confidences[idx], class_ids[idx]), indexes)
        return []

    def detected_to_result(self, box, confidence, class_id):
        (x, y) = (box[0], box[1])
        (w, h) = (box[2], box[3])

        label = self.LABELS[class_id]

        return {
            "label": label,
            "confidence": confidence,
            "bbox": {
                "x": x,
                "y": y,
                "width": w,
                "height": h
            }
        }


if __name__ == "__main__":
    logging.getLogger().setLevel(logging.DEBUG)

    detector = Detector()
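The centre-to-corner conversion inside `detect()` can be checked in isolation; this pure-Python helper (`yolo_box_to_corner` is an illustrative name, not repository code) mirrors that arithmetic, with each offset using its matching dimension:

```python
def yolo_box_to_corner(detection, frame_w, frame_h):
    # detection holds normalised (center_x, center_y, width, height),
    # as produced by the YOLO output layers
    cx, cy = detection[0] * frame_w, detection[1] * frame_h
    w, h = detection[2] * frame_w, detection[3] * frame_h
    # shift from the centre to the top-left corner, the format NMSBoxes expects
    x = int(cx - w / 2)
    y = int(cy - h / 2)
    return [x, y, int(w), int(h)]
```

For a box centred in a 416x416 frame at a quarter of the width and half the height, this yields the top-left corner plus pixel dimensions.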