- not having the coop ready before buying chickens is a classic mistake
- don’t overcrowd your coop
- feed a well-rounded diet: enough animal protein and high-quality greens (alfalfa hay, variety). They also need grit; that’s how they digest.
- chickens can get worms, and poopy eggs are a bad sign. Use diatomaceous earth.
- they need shade and privacy
HumaneAI
Toward AI Systems that Augment and Empower Humans by Understanding Us, our Society and the World Around Us
Humane AI Concept and Research Plan
https://www.humane-ai.eu/wp-content/uploads/2019/11/D21-HumaneAI-Concept.pdf
Existing Chicken Robots
Found that Wageningen University in the Netherlands has a PoultryBot.
A bunch more here: https://www.denbow.com/8-digital-technologies-poultry-producers/
Obviously they’re all for industry. China and Thailand apparently have ‘robot nannies’ monitoring chicken health. There are also some robots for feeding chickens, and for keeping them on their toes, by driving into them…
PoulBot is a bit closer to the magic side: it studies social ecology, with a robot for chicks to imprint on, which leads chicks around, counts them, and goes back for any left behind. It beeps instead of clucking. The above-head camera is probably not realistic for most settings, but it enables a useful behaviour they call avoid-running-over-chick. The idea is to work out what cues are required for chicks to accept a robot as a mother.
MFRU footage
Was hoping to get this kind of footage:
Could fake a bit of robot navigation in editing for these static ones
heh. This rings a bell; did you find it before?
omg if you have the stomach please report all the ‘cars vs chickens’ videos on YouTube, I can’t even click on them to do that holy shit. Hoping they are not what the thumbnail implies :/
Spinning Up
OpenAI Spinning Up https://spinningup.openai.com/en/latest/
So I’ve got it working, OpenAI’s PPO. https://github.com/openai/spinningup/issues/142 – Needs a workaround to run your own envs.
But I can’t work out how to increase the exploration “factor”. It’s some sort of application of Gaussian noise, I think, which is the simple idea. It looks like clip_ratio is maybe what I need. Hmm, but PPO doesn’t want it.
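Digging a little: in Spinning Up’s PPO the exploration noise seems to come from the Gaussian policy itself rather than from any CLI flag. A rough sketch of the relevant bit (paraphrasing mlp_gaussian_policy from the TF1 core.py; mu is the policy network’s output):

```python
import numpy as np
import tensorflow as tf  # TF1.x, as used by the original Spinning Up code

def gaussian_sample(mu, act_dim):
    # log_std is a trainable variable, so the noise scale is learned
    # during training rather than exposed as a command-line flag.
    log_std = tf.get_variable('log_std',
                              initializer=-0.5 * np.ones(act_dim, dtype=np.float32))
    std = tf.exp(log_std)
    # Exploratory action: policy mean plus Gaussian noise scaled by std.
    return mu + tf.random_normal(tf.shape(mu)) * std
```

So if anything, raising that initial log_std (the -0.5) looks like the closest thing to an exploration knob; clip_ratio just bounds how far each policy update can move, which is a different thing. For reference, the CLI defaults and the full signature: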
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--env', type=str, default='HalfCheetah-v2')
parser.add_argument('--hid', type=int, default=64)
parser.add_argument('--l', type=int, default=2)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', '-s', type=int, default=0)
parser.add_argument('--cpu', type=int, default=4)
parser.add_argument('--steps', type=int, default=4000)
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--exp_name', type=str, default='ppo')
def ppo(env_fn, actor_critic=core.mlp_actor_critic, ac_kwargs=dict(), seed=0,
        steps_per_epoch=4000, epochs=50, gamma=0.99, clip_ratio=0.2,
        pi_lr=3e-4, vf_lr=1e-3, train_pi_iters=80, train_v_iters=80,
        lam=0.97, max_ep_len=1000, target_kl=0.01, logger_kwargs=dict(),
        save_freq=10):
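With the workaround from issue 142, running it on a custom env looks roughly like this (a sketch: ‘ChickenArena-v0’ is a hypothetical env id, assumed to be registered with gym already):

```python
import gym
from spinup import ppo

# Hypothetical custom env id; it must be registered with gym beforehand.
env_fn = lambda: gym.make('ChickenArena-v0')

ppo(env_fn=env_fn,
    ac_kwargs=dict(hidden_sizes=(64, 64)),  # matches the --hid/--l defaults above
    steps_per_epoch=4000, epochs=50,
    logger_kwargs=dict(output_dir='out/ppo_chicken', exp_name='ppo_chicken'))
```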
https://github.com/openai/spinningup/issues/12
OK, so I tried the SAC algorithm too, and the issue I have now is:
AttributeError: 'list' object has no attribute 'reshape'
So the thing is the dimensionality.
“The FetchReach environment has a Dict observation space (because it packages not only the arm position, but also the target location, into the observation), and Spinning Up does not implement support for Dict observation spaces yet. One thing you can do is add a FlattenDictWrapper from gym (for example usage see, for instance:

env = FlattenDictWrapper(env, ['observation', 'desired_goal'])

Spinning Up implementations currently only support envs with Box observation spaces (where observations are real-valued vectors). These environments have Dict observation spaces, so each obs is a dict of (key, vector) pairs. If you want to test things out in these envs, I recommend doing it as a hacking project!”
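Put together, the wrapper fix looks something like this (a sketch assuming an older gym release where FlattenDictWrapper still lives in gym.wrappers; later versions renamed it):

```python
import gym
from gym.wrappers import FlattenDictWrapper  # present in older gym releases

env = gym.make('FetchReach-v1')
# Flatten the Dict observation (arm state + goal) into a single Box vector
# that Spinning Up's Box-only observation handling can digest.
env = FlattenDictWrapper(env, ['observation', 'desired_goal'])
print(env.observation_space)  # now a flat Box instead of a Dict
```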
Noteworthy compilation of resources relevant to Dr. Ori Cohen’s work:
https://docs.google.com/document/d/1wvtcwc8LOb3PZI9huQOD7UjqUoY98N5r3aQsWKNAlzk/edit#
Gilbert Tanner:
Eugene Yan:
Looks like you can just run TensorFlow.js with COCO really easily:

import * as cocoSsd from "@tensorflow-models/coco-ssd";

const image = document.getElementById("image");

cocoSsd.load()
  .then(model => model.detect(image))
  .then(predictions => console.log(predictions));

But then you just get the categories COCO was trained on. Bird.
TuriCreate
Apple also has an ML framework:
https://github.com/apple/turicreate
Turi Create simplifies the development of custom machine learning models. You don’t have to be a machine learning expert to add recommendations, object detection, image classification, image similarity or activity classification to your app.
- Easy-to-use: Focus on tasks instead of algorithms
- Visual: Built-in, streaming visualizations to explore your data
- Flexible: Supports text, images, audio, video and sensor data
- Fast and Scalable: Work with large datasets on a single machine
- Ready To Deploy: Export models to Core ML for use in iOS, macOS, watchOS, and tvOS apps
With Turi Create, you can accomplish many common ML tasks:
ML Task | Description
---|---
Recommender | Personalize choices for users
Image Classification | Label images
Drawing Classification | Recognize Pencil/Touch Drawings and Gestures
Sound Classification | Classify sounds
Object Detection | Recognize objects within images
One Shot Object Detection | Recognize 2D objects within images using a single example
Style Transfer | Stylize images
Activity Classification | Detect an activity using sensors
Image Similarity | Find similar images
Classifiers | Predict a label
Regression | Predict numeric values
Clustering | Group similar datapoints together
Text Classifier | Analyze sentiment of messages
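Object detection is the relevant one for us. A minimal sketch of what training and exporting could look like (assuming a Turi Create SFrame with ‘image’ and ‘annotations’ columns has been prepared; the file names are made up):

```python
import turicreate as tc

# Hypothetical dataset: an SFrame with 'image' and 'annotations' columns.
data = tc.SFrame('chickens.sframe')
train, test = data.random_split(0.8)

model = tc.object_detector.create(train, feature='image', annotations='annotations')
print(model.evaluate(test))

# Export to Core ML for use in an iOS/macOS app.
model.export_coreml('ChickenDetector.mlmodel')
```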
Localized Narratives
https://google.github.io/localized-narratives/
One of Google’s recent projects, “image annotations connecting vision and language”.
It’s one of the features of the new Open Images v6 dataset.
Sounds like a good behaviour detection mechanism.
Google also has a new detection algorithm, and it looks much faster than Mask R-CNN: https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html
“EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling”
Of course the state of the art keeps improving, but this looks like a stepping stone.
EfficientDet
github: https://github.com/google/automl/tree/master/efficientdet
arxiv: https://arxiv.org/pdf/1911.09070.pdf
This is the S.O.T.A. The G.O.A.T. of 2020. So we’ll try it out:
https://towardsdatascience.com/custom-object-detection-using-tensorflow-from-scratch-e61da2e10087
https://gilberttanner.com/categories/object-detection
TF2 Model Zoo introduces new SOTA models such as CenterNet, ExtremeNet, and EfficientDet.
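Before committing to the full repo, the pretrained checkpoints on TF Hub might be the quickest way to poke at it (a sketch; the exact model handle and the test image are assumptions, so check tfhub.dev for the current version):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed TF Hub handle for a pretrained EfficientDet-D0; verify on tfhub.dev.
detector = hub.load('https://tfhub.dev/tensorflow/efficientdet/d0/1')

img = tf.io.read_file('chicken.jpg')  # hypothetical test image
img = tf.image.decode_jpeg(img, channels=3)
img = tf.expand_dims(img, 0)          # the model expects a batch of uint8 images

result = detector(img)
print(list(result.keys()))  # expect detection_boxes / scores / classes etc.
```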