It is impractical to run VNC because of the tiny resolution, and ssh -X tunnelling is out because it requires the host to have whatever drivers the Jetson uses. GStreamer is working well, though.
Cool.
I’m trying to run a Python program on the feed. Ended up finding the same issue reported elsewhere: RTP output not working. Bumped the thread.
Someone worked it out. Needed a do/while loop instead of a for loop.
It’s been a couple of months since I was here, and it’s now time to load up this gstreamer code again and see if I can stream a webcam feed from the robot, run inference with the trained CNN, and colour in the different classes in pretty colours.
So, TODO:
Get panoptic segmentation colour mapping by combining the output layers of the CNN. So something like this: we get a live webcam feed, run the frames through the semantic segmentation CNN, and combine the binary masks into a multiclass mask (see the sketch after this TODO).
Then gstream these rainbow multiclass masked images to a listening gstreamer video-viewer program
So, practically though, I need to set up the Jetson again, and get it compiling the trained h5 file, and using webcam stills as input. Then gstream the result. Ok.
Step 1, though: fix the power cable for the Jetson. Then try to set up webcam gstreaming.
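Roughly what that mask-combining step could look like, assuming the CNN hands back one binary mask per class as a numpy array; the class names and colours here are just placeholders:

import numpy as np

def combine_masks(masks, colours):
    # Paint each class's colour wherever its binary mask is set.
    # masks: dict of class name -> (H, W) boolean array; colours: dict of class name -> (R, G, B).
    # Later classes overwrite earlier ones where masks overlap.
    h, w = next(iter(masks.values())).shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for name, mask in masks.items():
        out[mask.astype(bool)] = colours[name]
    return out

# toy stand-ins for the CNN's output layers
masks = {"chicken": np.random.rand(240, 320) > 0.5, "egg": np.random.rand(240, 320) > 0.8}
colours = {"chicken": (255, 0, 0), "egg": (0, 255, 0)}
rainbow = combine_masks(masks, colours)  # (240, 320, 3) uint8 image, ready to stream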
Ok, it’s plugged in, and nmap found it. Let’s log in… Ok, run gstreamer, and then I’ve got a folder, jetson-inference, and I’m volume mapping it.
import jetson.inference
import jetson.utils

# load the SSD-Mobilenet-v2 detector
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("rtp://192.168.101.127:1234", "--headless")  # 'my_video.mp4' for file

while True:
    img = camera.Capture()
    detections = net.Detect(img)
    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))
    if not camera.IsStreaming() or not display.IsStreaming():
        break
And my nice Docker container, with TF2 and everything already installed, says:
root@jetson:/dmc/jetson-inference/build/aarch64/bin# ./my-detection.py
Traceback (most recent call last):
  File "./my-detection.py", line 24, in <module>
    import jetson.inference
ModuleNotFoundError: No module named 'jetson'
So what now… We open ‘/dev/video0’ for V4L2, then replace my inference code below.
img = camera.Capture()
detections = net.Detect(img)
display.Render(img)
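If the jetson module hadn't been fixable, a fallback could have been plain OpenCV over V4L2; a hypothetical sketch only, with detect_fn standing in for whatever model would actually run:

import cv2

def detect_fn(frame):
    # placeholder for a real detector
    return []

cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)  # open the webcam via V4L2
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detections = detect_fn(frame)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()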
Ok ... zoom ahead and it's all worked out...
---
After going through the wringer another time: zooming ahead, fixing that jetson module issue involves
cd jetson-inference/build
make install
ldconfig
---
Saved the sender program to the GitHub repo.
To get video to the screen:
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink
Or to an mp4:
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! x264enc ! qtmux ! filesink location=test.mp4 -e
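And for actually running a Python program on the feed, the receiving side can also be done with OpenCV, assuming it was built with GStreamer support; a sketch matching the port and caps above:

import cv2

# receive the RTP/H264 stream and hand decoded BGR frames to Python via appsink
pipeline = (
    "udpsrc port=1234 caps=application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96 "
    "! rtph264depay ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # frame is a regular numpy BGR image here: run inference, draw masks, etc.
    cv2.imshow("jetson feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()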
Looks like you can just run TensorFlow.js with COCO-SSD really easily.
import * as cocoSsd from "@tensorflow-models/coco-ssd";
const image = document.getElementById("image")
cocoSsd.load()
.then(model => model.detect(image))
.then(predictions => console.log(predictions))
But then you just get the categories COCO was trained on. Bird.
Turi Create simplifies the development of custom machine learning models. You don’t have to be a machine learning expert to add recommendations, object detection, image classification, image similarity or activity classification to your app.
Easy-to-use: Focus on tasks instead of algorithms
Visual: Built-in, streaming visualizations to explore your data
Flexible: Supports text, images, audio, video and sensor data
Fast and Scalable: Work with large datasets on a single machine
Ready To Deploy: Export models to Core ML for use in iOS, macOS, watchOS, and tvOS apps
With Turi Create, you can accomplish many common ML tasks.
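For instance, training and exporting an object detector is only a few lines; a sketch of the Turi Create API from memory, with hypothetical paths and column names:

import turicreate as tc

# hypothetical SFrame with an 'image' column and an 'annotations' column of bounding boxes
data = tc.SFrame("chickens.sframe")
train, val = data.random_split(0.8)

model = tc.object_detector.create(train, feature="image", annotations="annotations")
print(model.evaluate(val))
model.export_coreml("ChickenDetector.mlmodel")  # ready for iOS/macOS apps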
Args:
    name (str): the name that identifies a dataset, e.g. "coco_2014_train".
    metadata (dict): extra metadata associated with this dataset. You can leave it as an empty dict.
    json_file (str): path to the json instance annotation file.
    image_root (str or path-like): directory which contains all the images.
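(Those are the arguments to detectron2's register_coco_instances; registering the egg dataset would look something like this, with hypothetical paths:)

from detectron2.data.datasets import register_coco_instances

# hypothetical locations of the COCO-format egg annotations and images
register_coco_instances("eggs_train", {}, "eggs/annotations/train.json", "eggs/images/train")
register_coco_instances("eggs_val", {}, "eggs/annotations/val.json", "eggs/images/val")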
Training Dataset: The sample of data used to fit the model.
Validation Dataset: The sample of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyperparameters. The evaluation becomes more biased as skill on the validation dataset is incorporated into the model configuration.
Test Dataset: The sample of data used to provide an unbiased evaluation of a final model fit on the training dataset.
Debugging…
To show an image with OpenCV, you need to follow it with cv2.waitKey()
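Something like this, with a hypothetical image path:

import cv2

img = cv2.imread("frame_0001.jpg")  # hypothetical test frame
cv2.imshow("prediction", img)
cv2.waitKey(0)  # without this the window never actually paints
cv2.destroyAllWindows()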
As I don’t have an NVIDIA card, I needed to set cfg.MODEL.DEVICE = 'cpu'.
Got some “incompatible shapes” warnings – fair enough.
Since I’m running on CPU, I needed this environment variable setting to stop it from using too much memory:
LRU_CACHE_CAPACITY=1 python3 eggid.py
Got one “training diverged” with 0.02 learning rate. Changed to 0.001. It freezes a lot. Ubuntu freezes if you use too much memory.
Ok it kept freezing. Going to have to try on Google Colab maybe, or maybe limit python’s memory use. But that would presumably just result in “Memory Error” instead, only slightly less annoying than the computer freezing.
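For reference, both tweaks above (CPU device, lower learning rate) live on the detectron2 config object; a sketch, assuming a Mask R-CNN model-zoo config like the detectron2 tutorials use:

from detectron2.config import get_cfg
from detectron2 import model_zoo

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.DEVICE = "cpu"    # no NVIDIA card on this laptop
cfg.SOLVER.BASE_LR = 0.001  # 0.02 diverged, so drop the learning rate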
Some guy did object detection, with bounding boxes: https://colab.research.google.com/drive/1BRiFBC06OmWNkH4VpPl8Sf7IT21w7vXr https://www.mrdbourke.com/airbnb-amenity-detection/
Ok, I tried again with Roboflow, but it seems they only support bounding box training, and not the segmentation training I want.
Let’s try training bounding box object detection on the egg dataset…
[09/18 22:53:15 d2.evaluation.coco_evaluation]: Preparing results for COCO format …
[09/18 22:53:15 d2.evaluation.coco_evaluation]: Saving results to ./output/coco_instances_results.json
[09/18 22:53:15 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API…
Loading and preparing results…
DONE (t=0.00s)
creating index…
index created!
Running per image evaluation…
Evaluate annotation type bbox
COCOeval_opt.evaluate() finished in 0.00 seconds.
Accumulating evaluation results…
COCOeval_opt.accumulate() finished in 0.01 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.595
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.857
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.528
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.501
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.340
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.559
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.469
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.642
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.642
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.500
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.362
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.633
[09/18 22:53:15 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
So I think the training worked, perhaps, on the bounding boxes? Kinda hard to say without seeing it draw some boxes. Not entirely sure what all these AP numbers mean, but they relate to “Average Precision”: https://cocodataset.org/#detection-eval
So now, let’s do Google Open Images based training instead. It has a ‘Chicken’ subset, so that’s ideal. So I downloaded https://pypi.org/project/openimages/ and ran some Python:
from openimages.download import download_dataset
download_dataset("/media/chrx/0FEC49A4317DA4DA/openimages", ["Chicken"], annotation_format="pascal")
Ack this is only bounding boxes too.
Looks like https://pypi.org/project/oidv6/ is another open images downloader script.
Detectron2 needs COCO format, so converting from Pascal VOC to COCO… ?
I looked at this, https://github.com/roboflow-ai/voc2coco – nope, that’s bounding boxes only.
This looks like it might be the biggest format-conversion app I’ve found: the OpenVINO™ Toolkit.
We can find that the 'Chicken' category is represented by /m/09b5t:
wget https://storage.googleapis.com/openimages/v5/class-descriptions-boxable.csv
/m/09b5t,Chicken
I would prefer to get instance segmentation training working than bounding box training. But it looks like it’s gonna be a bit harder than anticipated.
At this point, we can download Google Open Images, with some bounding-box annotations in the OIDv6 format, and scale them down to 300×300 or similar. We can also get them in Pascal VOC format.
I’ve just set up a user on a friend’s server, and I followed the @nicolas.windt article.
Do I
a) try to get Google TensorFlow’s object detection working, as described in @nicolas.windt’s article?
Traceback (most recent call last):
  File "/home/danielb/work/models/research/object_detection/dataset_tools/create_oid_tf_record.py", line 45, in <module>
    from object_detection.dataset_tools import oid_tfrecord_creation
ImportError: No module named object_detection.dataset_tools
pip install tensorflow-object-detection-api
File "/home/danielb/work/models/research/object_detection/dataset_tools/create_oid_tf_record.py", line 110, in main
image_annotations, label_map, encoded_image)
File "/root/anaconda3/envs/tfRecords/lib/python2.7/site-packages/object_detection/dataset_tools/oid_tfrecord_creation.py", line 43, in tf_example_from_annotations_data_frame
annotations_data_frame.LabelName.isin(label_map)]
File "/root/anaconda3/envs/tfRecords/lib/python2.7/site-packages/pandas/core/generic.py", line 3614, in getattr
return object.getattribute(self, name)
AttributeError: 'DataFrame' object has no attribute 'LabelName'
This has to do with pandas not finding the format it wants.
---
I'm trying with Python 3.8 now; I had to change as_matrix to to_numpy because it was deprecated, and had to change some package names to tf.io.xxx.
Now
File "/root/anaconda3/lib/python3.8/site-packages/object_detection/dataset_tools/oid_tfrecord_creation.py", line 71, in tf_example_from_annotations_data_frame
dataset_util.bytes_feature('{}.jpg'.format(image_id)),
File "/root/anaconda3/lib/python3.8/site-packages/object_detection/utils/dataset_util.py", line 30, in bytes_feature
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
TypeError: '000411001ff7dd4f.jpg' has type str, but expected one of: bytes
So it needs like a to-bytes sort of thing. [b'a', b'b'] is what stackoverflow came up with. So it needs like [b'000411001ff7dd4f.jpg'] instead of ['000411001ff7dd4f.jpg'].
"Convert string to bytes"
looks like
b = mystring.encode()
So,
def bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
"Python string encoding is different in Python 2.7 vs 3.6 and it break Tensorflow."
"Hi, where i use encode() ?"
(from https://github.com/tensorflow/models/issues/1597)
ok... it's failing here:
standard_fields.TfExampleFields.filename: dataset_util.bytes_feature('{}.jpg'.format(image_id)),
Ok, and if I use value=value.encode():
TypeError: 48 has type int, but expected one of: bytes
(Ah, ASCII 48 is '0' from '000411001ff7dd4f', so not that.)
and value=[value.encode()] gets
AttributeError: 'bytes' object has no attribute 'encode'
...
but without .encode(),
TypeError: '000411001ff7dd4f.jpg' has type str, but expected one of: bytes
and the data is:
  feature_map = {
      standard_fields.TfExampleFields.object_bbox_ymin:
          dataset_util.float_list_feature(
              filtered_data_frame_boxes.YMin.to_numpy()),
      standard_fields.TfExampleFields.object_bbox_xmin:
          dataset_util.float_list_feature(
              filtered_data_frame_boxes.XMin.to_numpy()),
      standard_fields.TfExampleFields.object_bbox_ymax:
          dataset_util.float_list_feature(
              filtered_data_frame_boxes.YMax.to_numpy()),
      standard_fields.TfExampleFields.object_bbox_xmax:
          dataset_util.float_list_feature(
              filtered_data_frame_boxes.XMax.to_numpy()),
      standard_fields.TfExampleFields.object_class_text:
          dataset_util.bytes_list_feature(
              filtered_data_frame_boxes.LabelName.to_numpy()),
      standard_fields.TfExampleFields.object_class_label:
          dataset_util.int64_list_feature(
              filtered_data_frame_boxes.LabelName.map(lambda x: label_map[x])
              .to_numpy()),
      standard_fields.TfExampleFields.filename:
          dataset_util.bytes_feature('{}.jpg'.format(image_id)),
      standard_fields.TfExampleFields.source_id:
          dataset_util.bytes_feature(image_id),
      standard_fields.TfExampleFields.image_encoded:
          dataset_util.bytes_feature(encoded_image),
  }
and the input file looks like...
ImageID,Source,LabelName,Confidence,XMin,XMax,YMin,YMax,IsOccluded,IsTruncated,IsGroupOf,IsDepiction,IsInside
00e71a70a2f669ff,xclick,/m/09b5t,1,0.18049793,0.95435685,0.056603774,0.9638365,0,1,0,0,0
01463f5494340d3d,xclick,/m/09b5t,1,0,0.59791666,0.2125,0.965625,0,0,0,0,0
Ok, screw it. Stack Overflow time.
https://stackoverflow.com/questions/64072148/typeerror-has-type-str-but-expected-one-of-bytes
Looks like it's a current bug: https://github.com/tensorflow/models/issues/7997
Ok, turns out I had actually worked it out yesterday with .encode('utf-8'), but it went on to the same bug on the next line.
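So the fix, roughly, is to encode both the filename and the source_id entries in oid_tfrecord_creation.py before they reach bytes_feature; something like this (reconstructed from the notes above, not the exact diff):

standard_fields.TfExampleFields.filename:
    dataset_util.bytes_feature('{}.jpg'.format(image_id).encode('utf-8')),
standard_fields.TfExampleFields.source_id:
    dataset_util.bytes_feature(image_id.encode('utf-8')),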
Ok now it generated some TFRecords.
So now we can train it...
As explained here: https://towardsdatascience.com/custom-object-detection-using-tensorflow-from-scratch-e61da2e10087
The models directory came with a notebook file (.ipynb) that we can use to get inference with a few tweaks. It is located at models/research/object_detection/object_detection_tutorial.ipynb. Follow the steps below to tweak the notebook:
Comment out cell #5 completely (just below Download Model)
Since we’re only testing on one image, comment out PATH_TO_TEST_IMAGES_DIR and TEST_IMAGE_PATHS in cell #9 (just below Detection)
In cell #11 (the last cell), remove the for-loop, unindent its content, and add path to your test image:
imagepath = 'path/to/image_you_want_to_test.jpg'
After following through the steps, run the notebook and you should see the corgi in your test image highlighted by a bounding box!
or
b) Install PyTorch and detectron2 (I keep thinking deceptron2), convert the OIDv6 or Pascal VOC formats to COCO format (or rsync the egg data files over ssh to the new machine), and train Mask-RCNN, like with the eggs dataset? (I am using my friend’s server because my laptop can’t handle the training. Keeps freezing.)
or
c) Get EfficientDet running: Strangely, https://github.com/google/automl only contains EfficientDet. Is that AutoML? EfficientDet? Surely not. Odd.
Ok…
At this point I’m ok with just trying to get anything working. Bounding boxes. Ok. After an hour of just looking at options, probably B.
Ended up doing A. Seems Google just got TensorFlow 2’s Object Detection API working recently: https://blog.tensorflow.org/2020/07/tensorflow-2-meets-object-detection-api.html
TF2 ships with Keras built in (tf.keras). From what I can tell so far, the main difference between TF2 and PyTorch is that you can modify the neural architecture at runtime with PyTorch, whereas TF2 has Keras, which gives an elegant way to describe a neural network architecture in code.
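For example, a minimal Keras sketch (nothing to do with the detection models here, just the style):

import tensorflow as tf

# a tiny image classifier, declared layer by layer
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()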
So, one thing to note: when I decide to attempt object segmentation again, the process will probably follow @nicolas.windt’s tutorial, but with this file instead (and the corresponding test- and validation- files): https://storage.googleapis.com/openimages/v5/train-annotations-object-segmentation.csv
For now, got the images, and will try train with the TF2 OD-API, starting with one of the models in the zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
AdelaiDet is an open source toolbox for multiple instance-level recognition tasks on top of Detectron2. All instance-level recognition works from our group are open-sourced here.
To date, AdelaiDet implements a number of instance-level recognition algorithms.
“When applied to a reinforcing dataset containing 27,828 images of chickens in a stunned state, the identification accuracy of the model was 98.06%. This was significantly higher than both the established back propagation neural network model (90.11%) and another Faster-RCNN model (96.86%). The proposed algorithm can complete the inspection of the stunned state of more than 40,000 broilers per hour. The approach can be used for online inspection applications to increase efficiency, reduce labor and cost, and yield significant benefits for poultry processing plants.” https://www.sciencedirect.com/science/article/pii/S0032579119579093
Their abstract frames benefit in terms of slaughtering efficiency. Interesting ‘local optima’ ethics-wise. But yes, since we kill 178 million broiler chickens a day, we should at least have an AI checking that the stunning worked. Perhaps implement some “Ethics policy” to re-stun the chicken, if not properly stunned.
(Stunning means the conveyor belt dips the chickens’ heads into electrified water to stun them, so their heads dangle and can be ripped off mechanically.)