My name is Miranda, and I’m a Capetonian, but I’m currently studying for a Master’s degree in Sustainable Design in Sweden. Although I’m far away from home, my heart is still there, and the work I do in sustainability is still for South Africa. I’m currently working on a project which aims to design cheap technological aids to improve egg farming, for both the farmers and the chickens. I was wondering if you would be open to an online / WhatsApp call interview? I would like to know whether my ideas would be useful for a farm like yours, and how my inventions could be tailored to a chicken farm’s needs. I found out about your chamomile farming from the newspaper article, and was inspired by your story. I hope my inventions will be able to help small but successfully growing family-run businesses like yours. I hope to hear from you!
For some reason, pip install torch, which is what I was trying to do, kept dying. It’s a ~700MB download, and top showed the process running out of memory.
Ultimately the fix for that was:
pip install torch --no-cache-dir
(Something was wrong with the cache, I guess.)
I also ended up deleting the contents of ~/.cache/pip, which was 2.2GB; the newer pip cache purge command only clears the wheel cache.
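The cleanup, roughly, as shell commands (pip cache exists from pip 20.1 onwards; on older pips, deleting the directory is the whole fix):

```shell
# See how big pip's cache has grown (mine was 2.2GB).
du -sh ~/.cache/pip 2>/dev/null || true

# Clear it: `pip cache purge` on newer pips, plain rm otherwise.
pip cache purge 2>/dev/null || rm -rf ~/.cache/pip

# Then retry the install, bypassing the cache entirely
# (commented out here since it pulls ~700MB):
# pip install torch --no-cache-dir
```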
Anyway, trying to do development on a 23GB Chromebook running GalliumOS gets tough.
I spend a lot of time moving things around. I got myself a 512GB NVMe SSD to alleviate the situation.
The most common trick for checking disk space is df -h, for overall use, and du -h --max-depth=1, to see how big the directories below your current dir are.
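In one go, the two commands, with a sort added so the biggest offenders land at the bottom:

```shell
# Free space on the filesystem holding the current directory:
df -h .

# Size of each directory directly below the current one, biggest last:
du -h --max-depth=1 . 2>/dev/null | sort -h
```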
So, first things first, the SSD doesn’t want to show up. Ah, the USB-C wasn’t pushed in all the way. Derp.
Second, to clear up some space: Linux keeps journal logs.
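A sketch of the journal cleanup (the 200M cap is an arbitrary choice of mine, and the vacuum needs root, so it’s commented out here):

```shell
# How much space is the systemd journal using?
command -v journalctl >/dev/null && journalctl --disk-usage || true

# Shrink it to a cap of your choosing:
# sudo journalctl --vacuum-size=200M
```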
This https://github.com/facebookresearch/meshrcnn is maybe getting closer to the holy grail in my mind. I like the idea of bridging the gap between simulation and reality in the other direction too, by converting the world into object meshes: Real2Sim.
The OpenAI Rubik’s cube hand policy transfer was done with a camera in simulation and a camera in the real world. This could allow a sort of dreaming, i.e., running simulations on newly captured 3D obj data.
It could acquire data that it could mull over while the chickens are asleep.
There’s no chicken category in Pix3D. But we’re getting closer. Just need a chicken-and-egg dataset.
Downloading Blender again, to check out the .obj file that was generated. At first Blender didn’t want to show it, but here’s a handy site for viewing OBJ files: https://3dviewer.net/. The issue in Blender required selecting the obj, then View > Frame Selected, to make it zoom in. Switching from perspective to orthographic view also helps.
pip install pyyaml==5.1
Successfully built pyyaml
Installing collected packages: pyyaml
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.12
ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
pip3 install --ignore-installed PyYAML
Successfully installed PyYAML-5.1
Next error...
ModuleNotFoundError: No module named 'torchvision'
pip install torchvision
Next error...
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
ok
python3 demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --webcam --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu
[08/17 20:53:11 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', input=None, opts=['MODEL.WEIGHTS', 'detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl', 'MODEL.DEVICE', 'cpu'], output=None, video_input=None, webcam=True)
[08/17 20:53:12 fvcore.common.checkpoint]: Loading checkpoint from detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
[08/17 20:53:12 fvcore.common.file_io]: Downloading https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl …
[08/17 20:53:12 fvcore.common.download]: Downloading from https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl …
model_final_f10217.pkl: 178MB [01:26, 2.05MB/s]
[08/17 20:54:39 fvcore.common.download]: Successfully downloaded /root/.torch/fvcore_cache/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl. 177841981 bytes.
[08/17 20:54:39 fvcore.common.file_io]: URL https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl cached in /root/.torch/fvcore_cache/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
[08/17 20:54:39 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
0it [00:00, ?it/s]/opt/detectron2/detectron2/layers/wrappers.py:226: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
return x.nonzero().unbind(1)
0it [00:06, ?it/s]
Traceback (most recent call last):
File "demo.py", line 118, in
cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
cv2.error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window.cpp:634: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvNamedWindow'
Ok...
pip install opencv-python
Requirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (4.2.0.34)
Looks like a 4.3.0 vs 4.2.0.34 version mismatch kind of thing.
sudo apt-get install libopencv-*
nope...
/opt/detectron2/detectron2/layers/wrappers.py:226: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.)
return x.nonzero().unbind(1)
def nonzero_tuple(x):
    """
    A 'as_tuple=True' version of torch.nonzero to support torchscript.
    because of https://github.com/pytorch/pytorch/issues/38718
    """
    if x.dim() == 0:
        return x.unsqueeze(0).nonzero().unbind(1)
    return x.nonzero(as_tuple=True).unbind(1)
AttributeError: 'tuple' object has no attribute 'unbind'
https://github.com/pytorch/pytorch/issues/38718
FFS. Why does nothing ever fucking work?
pytorch 1.6:
"putting 1.6.0 milestone for now; this isn't the worst, but it's a pretty bad user experience."
Yeah no shit.
let's try...
return x.nonzero(as_tuple=False).unbind(1)
Ok, next error, same thing, this time at:
/opt/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py:111
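The same one-line change in both files can be scripted; here’s the sed I could have used, demonstrated on a stand-in file (the real targets on my box were /opt/detectron2/detectron2/layers/wrappers.py and /opt/detectron2/detectron2/modeling/roi_heads/fast_rcnn.py):

```shell
# Stand-in file with the deprecated call:
printf '        return x.nonzero().unbind(1)\n' > /tmp/nonzero_demo.py

# Swap the deprecated call for the explicit as_tuple=False form:
sed -i 's/\.nonzero()\.unbind(1)/.nonzero(as_tuple=False).unbind(1)/' /tmp/nonzero_demo.py

cat /tmp/nonzero_demo.py
```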
Ok... back to this error (after adding as_tuple=False twice)
File "demo.py", line 118, in
cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
cv2.error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window.cpp:634: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvNamedWindow'
Decided to check if maybe this is a conda vs pip thing. Like maybe I just need to install the conda version instead?
But it looks like GTK+ 2.x isn’t installed. Seems I installed OpenCV using pip, i.e. pip install opencv-contrib-python, and that wheel isn’t built with GTK+ 2.x. I could also use Qt as the graphical interface.
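A quick way to check which cv2 Python is actually importing (my guess is two builds were shadowing each other, hence 4.3.0 in the error vs 4.2.0.34 from pip):

```shell
# Which OpenCV does Python actually load, and from where?
python3 - <<'EOF'
try:
    import cv2
    print(cv2.__version__, cv2.__file__)
except ImportError:
    print("cv2 not installed in this environment")
EOF
```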
“GTK supposedly uses more memory because GTK provides more functionality. Qt does less and uses less memory. If that is your logic, then you should also look at Aura and the many other user interface libraries providing less functionality.” (link)
My name is Miranda and I am a Master’s student in Sustainable Design at Linnaeus University in Växjö. I am working on a practical research project which aims to decrease cruelty in the egg farming industry by using technology to provide humane alternatives for large-scale commercial chicken farms.
I found out about your farm from the Reko Ring, and I was wondering if you might be open to me coming to visit to do some research? Your small-scale, free-range practices are exactly what we would like to convince big agri-business is possible in the future.
The research would entail me taking some video footage of the chickens and where they live, seeing how they respond to a small robot prototype, and hopefully, if you have time, asking you some questions about your sustainable chicken farming practices.
I hope to hear from you, and that you are enjoying the sunshine!
Firstly, an apology that I can’t speak Swedish! I hope that’s ok.
My name is Miranda and I am a Master’s student in Sustainable Design at Linnaeus University in Växjö. I am working on a practical research project which aims to decrease cruelty in the mass egg farming industry by using technology to provide humane, cost-effective alternatives for large-scale commercial battery farms.
I found out about your farm from the Reko Ring, and I was wondering if you might be open to me coming to visit to do some research? Your small-scale, free-range practices are exactly what we would like to convince big agri-business is possible in the future.
The research would entail me taking some video footage of the chickens and where they live, seeing how they respond to a small robot prototype, and hopefully, if you have time, asking you some questions about your sustainable chicken farming practices.
I hope that you will be interested in contributing to research in pursuit of a more sustainable future! Please let me know if you have any questions.
I made this with foam and beads and string and gravity, and, to be safe, 2× 11kg·cm servos (I think). But that’s cos it wasn’t a very efficient mechanism, I guess, needing gravity and all. Anyway. It would be capable of picking stuff up, if it wasn’t a “flower”.
Bummer, I have two 11kg·cm torque servos, but they’re those continuous-rotation MG996Rs. So I guess it’s SG90s until we get some production cash. They are expensive in this part of the world...
Either way, I’ll try to make a chassis that’s easy to swap out for bulkier bois. But I also would like to make custom servo horns only once, and the likelihood of the SG90 horns fitting some metal-geared bad boy is very unlikely.
Unless we can fiddle with the example code so that the 2 servos which would carry the most weight are programmed as continuous-rotation ones?
I quickly tried some continuous-servo example code on the RPi, and it worked, but after the program finishes the motherfuckers just keep spinning forever, as happened to you in your previous post. What a mess.
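The runaway spinning is usually just the servo still obeying the last pulse it heard; a continuous-rotation servo treats the ~1.5 ms pulse as “stop”, so send that (or kill the PWM outright) before the program exits. A hedged sketch with RPi.GPIO, assuming the signal wire is on BCM pin 18 (both the pin and the library choice are my assumptions):

```shell
python3 - <<'EOF'
# Sketch only: stop a continuous-rotation servo cleanly before exit.
# Falls through harmlessly if we're not on a Pi.
try:
    import RPi.GPIO as GPIO
except ImportError:
    GPIO = None
    print("RPi.GPIO not available; sketch only")

# At 50 Hz the period is 20 ms, so a 1.5 ms "neutral" pulse is
# 1.5 / 20 = 7.5% duty cycle -- the stop point for continuous servos.
NEUTRAL_DUTY = 100.0 * 1.5 / 20.0

if GPIO:
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)
    pwm = GPIO.PWM(18, 50)      # 50 Hz servo signal
    pwm.start(NEUTRAL_DUTY)     # neutral = stopped
    # ... drive the servo here ...
    pwm.stop()                  # kill the pulse train
    GPIO.cleanup()              # release the pin on exit
EOF
```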
// Also had a thought: if we have the arm hanging from the bottom instead of reaching from the top, it could be more like a ‘tail’, and thus the whole robot moves the other way. That’ll be freaky as hell, ha, kinda exorcist-y, lol. Also, if it’s coming out the bottom, that’s gonna be a bit structurally / weight-wise dodgy, so we need a nice “not in use” / walking / idle position, or maybe it helps with locomotion too, I dunno. See the example of Spot with an arm below for nice inspiration.
Heh I love how Spot1 is like awwww I wanna open this door but I have no arms!!! 🙁 Booooo, Spot_with_arm plrz herlp thnx k I’m useless imma go now and go get pushed with hockey sticks