Categories
chicken_research

ZA egg farm

That one Hayley posted in the Vegan group – a successfully growing, family-run start-up in Philippi: http://chamomilefarming.co.za/

Small but growing nasty battery farm.

______________________________________________________

Dear Achmad
I hope you are well!

My name is Miranda, and I’m a Cape Townian, but I’m currently studying for a Master’s degree in Sustainable Design in Sweden.
Although I’m far away from home, my heart is still there, and the work I do in sustainability is still for South Africa. 
I’m currently working on a project which aims to design cheap technological aids that will improve egg farming, for both the farmers and the chickens.
I was wondering if you would be open to an online / WhatsApp call interview? I would like to know whether my ideas would be useful for a farm like yours, and how my inventions could be tailored to a chicken farm’s needs.
I found out about Chamomile Farming from the newspaper article, and was inspired by your story. I hope my inventions will be able to help small but successfully growing family-run businesses like yours.
I hope to hear from you!

Best,
Miranda.

Categories
dev Linux

Low Linux memory

Spent enough time on this to warrant a note.

For some reason, pip install torch, which is what I was trying to do, kept dying. It’s a ~700MB download, and top showed the machine running out of memory.

Ultimately the fix for that was:

pip install torch --no-cache-dir

(something was wrong with the cache I guess)

I also ended up deleting the contents of ~/.cache/pip, which was 2.2GB. The new pip cache purge only clears built wheels, not the whole cache.

Anyway, trying to do development on a 23GB chromebook with GalliumOS gets tough.

I spend a lot of time moving things around. I got myself a 512GB NVMe SSD to alleviate the situation.

The most common tricks for checking disk space are df -h to see filesystem use, and du -h --max-depth=1 to see how big the directories below your current dir are.
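A one-liner I keep reaching for, combining the two (assumes GNU coreutils, for sort -h):

```shell
# Biggest directories directly under the current dir,
# human-readable, largest last:
du -h --max-depth=1 . 2>/dev/null | sort -h | tail -n 5
```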

So, first thing first, the SSD doesn’t want to show up. Ah, the USB-C wasn’t pushed in all the way. Derp.

Second, to clear up some space: Linux keeps systemd journal logs, which can be trimmed.

https://unix.stackexchange.com/questions/139513/how-to-clear-journalctl :

set a max amount of logs to retain (by time/space):
journalctl --vacuum-time=2d
journalctl --vacuum-size=500M

The third thing is to make some more swap space, just in case (bs × count below = 2048 × 1,048,576 bytes = 2GiB).

touch /media/chrx/0FEC49A4317DA4DA/swapfile
cd /media/chrx/0FEC49A4317DA4DA/
sudo dd if=/dev/zero of=swapfile bs=2048 count=1048576
sudo mkswap swapfile
sudo swapon swapfile

swapon

NAME                          TYPE           SIZE   USED PRIO
/dev/zram0                    partition      5.6G 452.9M -2
/media/chrx/0FEC49A4317DA4DA/ swapfile file  2G       0B -3

Ok, probably didn’t need more swap space. Not sure what /dev/zram0 is, but maybe I can free up more of it, and up the priority of the SSD?
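(For what it’s worth, /dev/zram0 is a compressed-RAM swap device, which is why it’s fast and gets the higher priority.) If I ever do want the SSD swapfile ranked differently, swap priority is settable; a sketch of an /etc/fstab entry, assuming the same swapfile path (higher pri wins, so pri=0 would outrank zram’s -2, though zram, being RAM-backed, is usually worth keeping on top):

```
# make the swapfile permanent, with an explicit priority
/media/chrx/0FEC49A4317DA4DA/swapfile  none  swap  sw,pri=0  0  0
```

The one-off equivalent for a fresh mount is sudo swapon -p 0 /media/chrx/0FEC49A4317DA4DA/swapfile.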

Anyway, torch is installed now, so nevermind, until I need more memory.

Some more tricks:

Remove thumbnails:

du -sh ~/.cache/thumbnails

rm -rf ~/.cache/thumbnails/*

Clean apt cache:

sudo apt-get clean

Categories
3D Research AI/ML CNNs deep Vision

Mesh R-CNN

This https://github.com/facebookresearch/meshrcnn is maybe getting closer to the holy grail, in my mind. I like the idea of bridging the gap between simulation and reality in the other direction too, by converting the world into object meshes. Real2Sim.

The OpenAI Rubik’s cube hand policy transfer was done with a camera in simulation and a camera in the real world. This could allow a sort of dreaming, i.e., running simulations on new 3D obj data.

It could acquire data to mull over while the chickens are asleep.

PyTorch3d: https://arxiv.org/pdf/2007.08501.pdf

Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images https://arxiv.org/pdf/1804.01654.pdf

Remember Hinton’s dark knowledge. The trick is having a few models distill into one.
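Distillation in a nutshell, since I’ll forget: the student trains against the teacher’s temperature-softened softmax (an ensemble just averages the teachers’ soft targets first). A minimal numpy sketch; all the names and the temperature value are mine, not from any particular paper’s code:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T exposes the "dark knowledge"
    # hiding in the small logits.
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=4.0):
    # Cross-entropy between the teacher's softened distribution (soft
    # targets) and the student's softened predictions.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(-(p * np.log(q + 1e-12)).sum())

teacher = np.array([8.0, 2.0, -1.0])
student = np.array([7.5, 2.5, -0.5])
loss = distill_loss(student, teacher)
```

In practice this gets mixed with the ordinary hard-label loss, but the soft-target term is where the distilling happens.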

In trying to get Mesh R-CNN working, I had to add DEVICE=CPU to the config.

python3 demo/demo.py --config-file configs/pix3d/meshrcnn_R50_FPN.yaml --input /home/chrx/Downloads/chickenegg.jpg --output output_demo --onlyhighest MODEL.WEIGHTS meshrcnn://meshrcnn_R50.pth

Success! It’s a chair.

There’s no chicken category in Pix3d. But getting closer. Just need a chicken and egg dataset.

Downloading Blender again, to check out the obj file that was generated. At first Blender didn’t want to show it (here’s a handy site to view OBJ files in the meantime: https://3dviewer.net/). The fix in Blender was selecting the obj, then View > Frame Selected to make it zoom in. Switching from perspective to orthographic view also helps.

Chair is a pretty adaptable class.

Categories
AI/ML CNNs dev institutes OpenCV Vision

Detectron2

Ran through the nice working jupyter notebook https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=OpLg_MAQGPUT and produced this video

It is the Mask R-CNN algorithm from Matterport, ported over by Facebook AI and better maintained. It was forked and fixed up for tourists.

We can train it on the robot eye view camera, maybe train it on google images of copyleft chickens and eggs.

I think this looks great, for endowing the robot with a basic “recognition” of the features of classes it’s been exposed to.

https://github.com/facebookresearch/detectron2/tree/master/projects

https://detectron2.readthedocs.io/tutorials/extend.html

Seems I was oblivious to Facebook AI but of course they hire very smart people. I’d sell my soul for $240k/yr too. It is super nice to get a working Jupyter Notebook. Thank you. https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/

Here are the other FB projects using detectron2, copy-pasted:

Projects by Facebook

Note that these are research projects, and therefore may not have the same level of support or stability as detectron2.

External Projects

External projects in the community that use detectron2:

Also, more generally, https://ai.facebook.com/research/#recent-projects

Errors encountered while attempting to install https://detectron2.readthedocs.io/tutorials/getting_started.html

File "demo.py", line 8, in <module>
    import tqdm
ImportError: No module named tqdm

pip3 uninstall tqdm
pip3 install tqdm

Ok so…

python3 -m pip install -e .

python3 demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --webcam --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl

Requires pyyaml>=5.1

ok

pip install pyyaml==5.1
 Successfully built pyyaml
Installing collected packages: pyyaml
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.12
ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.

pip3 install --ignore-installed PyYAML
Successfully installed PyYAML-5.1

Next error...

ModuleNotFoundError: No module named 'torchvision'

pip install torchvision

Next error...

AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx


ok

python3 demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --webcam --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu


[08/17 20:53:11 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', input=None, opts=['MODEL.WEIGHTS', 'detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl', 'MODEL.DEVICE', 'cpu'], output=None, video_input=None, webcam=True)
[08/17 20:53:12 fvcore.common.checkpoint]: Loading checkpoint from detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
[08/17 20:53:12 fvcore.common.file_io]: Downloading https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl …
[08/17 20:53:12 fvcore.common.download]: Downloading from https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl …
model_final_f10217.pkl: 178MB [01:26, 2.05MB/s]
[08/17 20:54:39 fvcore.common.download]: Successfully downloaded /root/.torch/fvcore_cache/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl. 177841981 bytes.
[08/17 20:54:39 fvcore.common.file_io]: URL https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl cached in /root/.torch/fvcore_cache/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
[08/17 20:54:39 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
0it [00:00, ?it/s]/opt/detectron2/detectron2/layers/wrappers.py:226: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
return x.nonzero().unbind(1)
0it [00:06, ?it/s]
Traceback (most recent call last):
File "demo.py", line 118, in <module>
    cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
cv2.error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window.cpp:634: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvNamedWindow'


Ok...

pip install opencv-python

Requirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (4.2.0.34)

Looks like 4.3.0 vs 4.2.0.34 kinda thing


sudo apt-get install libopencv-*


nope...

/opt/detectron2/detectron2/layers/wrappers.py:226: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
return x.nonzero().unbind(1)


def nonzero_tuple(x):
    """
    A 'as_tuple=True' version of torch.nonzero to support torchscript.
    because of https://github.com/pytorch/pytorch/issues/38718
    """
    if x.dim() == 0:
        return x.unsqueeze(0).nonzero().unbind(1)
    return x.nonzero(as_tuple=True).unbind(1)

AttributeError: 'tuple' object has no attribute 'unbind'

(Which makes sense: with as_tuple=True, nonzero already returns a tuple of tensors, so there’s nothing left to unbind.)


https://github.com/pytorch/pytorch/issues/38718

FFS. Why does nothing ever fucking work ?
pytorch 1.6:
"putting 1.6.0 milestone for now; this isn't the worst, but it's a pretty bad user experience."

Yeah no shit.

let's try...

return x.nonzero(as_tuple=False).unbind(1)

Ok next error same

/opt/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:111


Ok... back to this error (after adding as_tuple=False twice)


File "demo.py", line 118, in <module>
    cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
cv2.error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window.cpp:634: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvNamedWindow'

Decided to check if maybe this is a conda vs pip thing. Like maybe I just need to install the conda version instead?

But it looks like GTK+ 2.x support isn’t built in. Seems I installed OpenCV using pip, i.e. pip install opencv-contrib-python, and that build doesn’t include GTK+ 2.x. I could also use Qt as the graphical interface.
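A quick way to check which GUI backend the installed cv2 was actually built with, before rebuilding anything (assumes python3 with OpenCV importable):

```shell
# Print the GUI section of OpenCV's build configuration
# (shows whether it was built with GTK, QT, or nothing):
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i -A3 "GUI"
```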

“GTK supposedly uses more memory because GTK provides more functionality. Qt does less and uses less memory. If that is your logic, then you should also look at Aura and the many other user interface libraries providing less functionality.” (link)

https://stackoverflow.com/questions/14655969/opencv-error-the-function-is-not-implemented

https://askubuntu.com/questions/913241/error-in-executing-opencv-in-ubuntu

So let’s make a whole new Chapter, because we’re installing OpenCV again! (Why? Because I want to try to run the detectron2 demo.py file.)

pip3 uninstall opencv-python
pip3 uninstall opencv-contrib-python 

(or sudo apt-get remove ___)

and afterwards build the OpenCV package from source, from GitHub:

git clone https://github.com/opencv/opencv.git

cd ~/opencv

mkdir release

cd release

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_GTK=ON -D WITH_OPENGL=ON ..

make

sudo make install

ok… pls…

python3 demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --webcam --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu

sweet jaysus finally.

Here’s an image of the network from a medium article on RCNN: https://medium.com/@hirotoschwert/digging-into-detectron-2-47b2e794fabd

Categories
chicken_research

Email Template

Dear ___________

Firstly, an apology that I can’t speak Swedish!

My name is Miranda and I am a Master’s student in Sustainable Design at Linnaeus University in Växjö. I am working on a practical research project which aims to decrease cruelty in the egg farming industry by using technology to provide humane alternatives for large-scale commercial chicken farms.

I found out about your farm from the Reko Ring, and I was wondering if you might be open to me coming to visit to do some research? Your small-scale, free-range practices are exactly what we would like to show big agri-business is possible in the future.

The research would entail me taking some video footage of the chickens and where they live, seeing how they respond to a small robot prototype, and hopefully, if you have time, asking you some questions about your sustainable chicken farming practices.

I hope to hear from you, and that you are enjoying the sunshine!

All the very best,

Miranda Moss.

_________________________________________________________

Dear Ulrika and Tomas,

Firstly, an apology that I can’t speak Swedish! I hope that’s ok.

My name is Miranda and I am a Master’s student in Sustainable Design at Linnaeus University in Växjö. I am working on a practical research project which aims to decrease cruelty in the mass egg farming industry by using technology to provide humane, cost-effective alternatives for large-scale commercial battery farms.

I found out about your farm from the Reko Ring, and I was wondering if you might be open to me coming to visit to do some research? Your small-scale, free-range practices are exactly what we would like to show big agri-business is possible in the future.

The research would entail me taking some video footage of the chickens and where they live, seeing how they respond to a small robot prototype, and hopefully, if you have time, asking you some questions about your sustainable chicken farming practices.

I hope that you will be interested in contributing to research in pursuit of a more sustainable future! Please let me know if you have any questions. 

All the very best,
Miranda Moss.  

Categories
gripper prototypes

not a hand

I made this with foam, beads, string, and gravity, and, to be safe, 2x 11kg/cm servos (I think). But that’s because it wasn’t a very efficient mechanism, I guess, needing gravity and all. Anyway. It would be capable of picking stuff up, if it wasn’t a “flower”.

Categories
Gripper Research

Gripper Copy Pasta

Inmoov hand:

https://inmoov.fr/finger-starter/

https://inmoov.fr/hand-and-forarm/

Categories
MFRU

RC?

Might be nice to have an RC control mode for the robot too, for interacting with the chickens.

Categories
Hardware

servos

Bummer, I have two 11kg-torque servos, but they’re those continuous-rotation MG996Rs. So I guess it’s SG90s until we get some production cash. They are expensive in this part of the world…

Either way, I’ll try to make a chassis that’s easy to swap out for bulkier bois. But I also would like to make custom servo horns only once, and the likelihood of the SG90 horns fitting some metal-geared bad boy is very unlikely.

Unless we can fiddle with the code so that the 2 servos which would carry the most weight are programmed as continuous-rotation ones?

I quickly tried some continuous-rotation servo example code on the RPi, and it worked, but after the program finishes the motherfuckers just keep spinning forever, as happened with you in your previous post. What a mess.
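For future me: a continuous-rotation servo reads the ~1.5 ms pulse as “stop” and anything else as a speed, and RPi.GPIO keeps emitting the last duty cycle until you stop the PWM object, hence the forever-spinning. A sketch of the pulse-to-duty maths (pure Python; the pulse endpoints are typical hobby-servo values, assumed rather than measured on the MG996R):

```python
def duty_cycle(pulse_ms, freq_hz=50):
    """PWM duty-cycle percentage for a given servo pulse width."""
    period_ms = 1000.0 / freq_hz
    return 100.0 * pulse_ms / period_ms

# Typical continuous-rotation convention (assumed, not measured):
#   1.0 ms -> full speed one way, 1.5 ms -> stop, 2.0 ms -> full speed back.
STOP = duty_cycle(1.5)   # 7.5% at 50 Hz
SPIN = duty_cycle(2.0)   # 10.0% at 50 Hz

# On the Pi, roughly (RPi.GPIO):
#   p = GPIO.PWM(pin, 50); p.start(STOP)
#   p.ChangeDutyCycle(SPIN)               # spin
#   p.ChangeDutyCycle(STOP); p.stop()     # actually halt before exiting
#   GPIO.cleanup()
```

The stop value drifts a bit per servo, so the exact “doesn’t creep” duty cycle usually needs tuning.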

Categories
3D prototypes doin a heckin inspire Gripper Locomotion

virtual robot structure


// also had a thought – if we have the arm hanging from the bottom instead of reaching from the top, it could be more like a ‘tail’, and thus the whole robot moves the other way. That’ll be freaky as hell, ha, kinda exorcisty, lol. Also, if it’s coming out the bottom that’s gonna be a bit structurally / weight-wise dodgy – so we need a nice “not in use” / walking / idle position, or maybe it helps with locomotion too, I dunno. See e.g. of Spot with arm below for nice inspires.

Heh I love how Spot1 is like awwww I wanna open this door but I have no arms!!! 🙁 Booooo, Spot_with_arm plrz herlp thnx k I’m useless imma go now and go get pushed with hockey sticks