Categories
chicken_research

For first-time chicken owners

  1. Have the coop ready before you buy chickens.
  2. Don’t overcrowd the coop.
  3. Feed a well-rounded diet: enough animal protein and high-quality greens (alfalfa hay, variety). They also need grit; that’s how they digest.
  4. Chickens can get worms, and poopy eggs are a bad sign. Use diatomaceous earth.
  5. They need shade and privacy.
Categories
AI/ML institutes

HumaneAI

Toward AI Systems that Augment and Empower Humans by Understanding Us, our Society and the World Around Us

https://www.humane-ai.eu/

Humane AI Concept and Research Plan

https://www.humane-ai.eu/wp-content/uploads/2019/11/D21-HumaneAI-Concept.pdf

Categories
chicken_research robots

Existing Chicken Robots

Found that Wageningen University in the Netherlands has a PoultryBot.

A bunch more here: https://www.denbow.com/8-digital-technologies-poultry-producers/

Obviously they’re all for industry. China and Thailand apparently have ‘robot nannies’ monitoring chicken health. There are also some robots for feeding chickens, and for keeping them on their toes by driving into them.

PoulBot is a bit closer to the magic side, studying social ecology: a robot for chicks to imprint on, which takes the chicks around, counts them, and goes back for any left behind. It beeps instead of clucking. The overhead camera is probably not realistic for most settings, but it allows a useful behaviour they call avoid-running-over-chick. The idea is to work out what cues are required for chicks to accept a robot as a mother.

Categories
MFRU

MFRU footage

Was hoping to get this kind of footage:

Could fake a bit of robot navigation in editing for these static ones

heh. This rings a bell; did you find it before?

https://www.youtube.com/watch?v=n0E_aZ5C0x8

ermagherd:

oof:

Heh

omg if you have the stomach please report all the ‘cars vs chickens’ videos on YouTube, I can’t even click on them to do that holy shit. Hoping they are not what the thumbnail implies :/


Awwww little flufflets :/

https://www.youtube.com/watch?v=B4zEa_7Ejtk

Never forget:

heh:

Categories
dev Locomotion The Sentient Table

Spinning Up

OpenAI Spinning Up https://spinningup.openai.com/en/latest/

So I’ve got OpenAI’s PPO working. It needs a workaround to run your own envs: https://github.com/openai/spinningup/issues/142

But I can’t work out how to increase the exploration “factor”.

I think it’s some sort of Gaussian noise applied to the actions, which is the simple idea.
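To make that concrete, here’s a toy numpy sketch (mine, not Spinning Up code) of what that Gaussian exploration amounts to: the policy net outputs a mean action, and the sampled action is that mean plus noise scaled by std = exp(log_std). If I’m reading the TF1 core.py right, log_std is a learned variable initialised at -0.5 per action dimension, so bumping that initial value up is one way to get more exploration from the start.

# Toy sketch (not Spinning Up code): how Gaussian-policy exploration works.
# The sampled action is mu + exp(log_std) * noise; exp(-0.5) ~= 0.61 is the
# default starting noise scale, and a larger log_std means more exploration.
import numpy as np

act_dim = 2
mu = np.zeros(act_dim)              # mean action from the policy network
log_std = -0.5 * np.ones(act_dim)   # bump this up (e.g. to 0.0) for noisier actions
std = np.exp(log_std)

action = mu + std * np.random.randn(act_dim)  # the exploratory action sent to the env
print("std:", std, "sampled action:", action)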

It looks like clip_ratio is maybe what I need.

Hmm, but PPO doesn’t want it.

parser.add_argument('--env', type=str, default='HalfCheetah-v2')
parser.add_argument('--hid', type=int, default=64)
parser.add_argument('--l', type=int, default=2)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', '-s', type=int, default=0)
parser.add_argument('--cpu', type=int, default=4)
parser.add_argument('--steps', type=int, default=4000)
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--exp_name', type=str, default='ppo')


def ppo(env_fn, actor_critic=core.mlp_actor_critic, ac_kwargs=dict(), seed=0,
        steps_per_epoch=4000, epochs=50, gamma=0.99, clip_ratio=0.2, pi_lr=3e-4,
        vf_lr=1e-3, train_pi_iters=80, train_v_iters=80, lam=0.97, max_ep_len=1000,
        target_kl=0.01, logger_kwargs=dict(), save_freq=10):
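Since clip_ratio isn’t among those CLI arguments, one workaround (my sketch, untested) is to skip the run script and call ppo() directly, passing clip_ratio yourself. The import path depends on the Spinning Up version: older releases expose from spinup import ppo, newer ones split it into ppo_tf1 / ppo_pytorch.

import gym
from spinup import ppo_tf1 as ppo   # or `from spinup import ppo` on older versions

# ppo() expects a function that builds the env, not an env instance.
env_fn = lambda: gym.make('HalfCheetah-v2')   # swap in your own registered env here

ppo(env_fn,
    ac_kwargs=dict(hidden_sizes=[64, 64]),
    gamma=0.99,
    clip_ratio=0.3,     # default is 0.2; not exposed by the argparse script above
    steps_per_epoch=4000,
    epochs=50)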

https://github.com/openai/spinningup/issues/12

OK, so I tried the SAC algo too, and the issue I have now is:

AttributeError("'list' object has no attribute 'reshape'")

So the thing is the dimensionality of the observations:

“FetchReach environment has Dict observation space (because it packages not only arm position, but also the target location into the observation), and spinning up does not implement support for Dict observation spaces yet. One thing you can do is add a FlattenDictWrapper from gym (for example usage see, for instance,

env = FlattenDictWrapper(env, ['observation', 'desired_goal'])

Spinning Up implementations currently only support envs with Box observation spaces (where observations are real-valued vectors). These environments have Dict observation spaces, so each obs is a dict of (key, vector) pairs. If you want to test things out in these envs, I recommend doing it as a hacking project! 🙂 “
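A minimal sketch of that suggestion (assuming an older gym release that still ships FlattenDictWrapper; newer gym versions replace it with FilterObservation plus FlattenObservation):

import gym
from gym.wrappers import FlattenDictWrapper   # removed in newer gym releases

def env_fn():
    env = gym.make('FetchReach-v1')   # Dict obs: observation / achieved_goal / desired_goal
    # Concatenate the chosen dict keys into one flat Box observation,
    # which is what the Spinning Up implementations expect.
    return FlattenDictWrapper(env, ['observation', 'desired_goal'])

print(env_fn().observation_space)   # a flat Box instead of a Dict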

Categories
AI/ML deep

ML/DL COMPENDIUMS

A noteworthy compilation of resources by Dr. Ori Cohen:

https://docs.google.com/document/d/1wvtcwc8LOb3PZI9huQOD7UjqUoY98N5r3aQsWKNAlzk/edit#

Gilbert Tanner:

https://gilberttanner.com/

Eugene Yan:

https://github.com/eugeneyan/applied-ml

Categories
AI/ML Vision

Simple Object Detection

Looks like you can just run TensorFlow.js with the COCO-SSD model really easily.

// COCO-SSD needs a TensorFlow.js backend available; importing the main tfjs package covers that.
import "@tensorflow/tfjs";
import * as cocoSsd from "@tensorflow-models/coco-ssd";

const image = document.getElementById("image");

// Load the pretrained model, detect objects in the <img> element,
// and log the predicted classes, scores and bounding boxes.
cocoSsd.load()
  .then(model => model.detect(image))
  .then(predictions => console.log(predictions));

But then you just get the categories COCO was trained on, so a chicken only registers as “bird”.

Categories
AI/ML

TuriCreate

Apple also has an ML framework:

https://github.com/apple/turicreate

Turi Create simplifies the development of custom machine learning models. You don’t have to be a machine learning expert to add recommendations, object detection, image classification, image similarity or activity classification to your app.

  • Easy-to-use: Focus on tasks instead of algorithms
  • Visual: Built-in, streaming visualizations to explore your data
  • Flexible: Supports text, images, audio, video and sensor data
  • Fast and Scalable: Work with large datasets on a single machine
  • Ready To Deploy: Export models to Core ML for use in iOS, macOS, watchOS, and tvOS apps

With Turi Create, you can accomplish many common ML tasks:

  • Recommender: Personalize choices for users
  • Image Classification: Label images
  • Drawing Classification: Recognize Pencil/Touch Drawings and Gestures
  • Sound Classification: Classify sounds
  • Object Detection: Recognize objects within images
  • One Shot Object Detection: Recognize 2D objects within images using a single example
  • Style Transfer: Stylize images
  • Activity Classification: Detect an activity using sensors
  • Image Similarity: Find similar images
  • Classifiers: Predict a label
  • Regression: Predict numeric values
  • Clustering: Group similar datapoints together
  • Text Classifier: Analyze sentiment of messages
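To get a feel for the API, here’s a hedged sketch of the image classification task using Turi Create’s documented calls; the folder layout and the ‘chicken’ label are my assumptions, not something from this post:

import turicreate as tc

# Load training images; with_path=True keeps each file's path so we can derive a label from it.
data = tc.image_analysis.load_images('training_images/', with_path=True)
data['label'] = data['path'].apply(lambda p: 'chicken' if 'chicken' in p else 'other')

# Train an image classifier and export it to Core ML for the Apple platforms listed above.
model = tc.image_classifier.create(data, target='label')
model.export_coreml('ChickenClassifier.mlmodel')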
Categories
AI/ML

Localised Narratives

https://google.github.io/localized-narratives/

One of Google’s recent projects, “image annotations connecting vision and language”.

It’s one of the features of the new Open Images v6 dataset.

Sounds like a good behaviour detection mechanism.

Categories
CNNs Vision

EfficientDet

Google also has a new detection algorithm, and it looks much faster than Mask R-CNN: https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html

EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling

Of course the state of the art keeps improving, but this looks like a stepping stone.

EfficientDet

github: https://github.com/google/automl/tree/master/efficientdet

arxiv: https://arxiv.org/pdf/1911.09070.pdf

This is the S.O.T.A. The G.O.A.T. of 2020. So we’ll try it out:

https://heartbeat.fritz.ai/end-to-end-object-detection-using-efficientdet-on-raspberry-pi-3-part-2-bb5133646630

https://towardsdatascience.com/custom-object-detection-using-tensorflow-from-scratch-e61da2e10087

https://gilberttanner.com/categories/object-detection

TF2 Model Zoo introduces new SOTA models such as CenterNet, ExtremeNet, and EfficientDet.

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
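As a first way to try it out, here’s a hedged sketch of running a pretrained EfficientDet-D0 from TensorFlow Hub on a single image; the hub URL and output keys follow the TF2 detection zoo convention, but check the model page since signatures change between versions:

import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/efficientdet/d0/1")

# The detector expects a uint8 image tensor with a batch dimension: [1, H, W, 3].
img = tf.io.decode_jpeg(tf.io.read_file("chicken.jpg"), channels=3)
result = detector(tf.expand_dims(img, 0))

boxes = result["detection_boxes"][0].numpy()      # normalised [ymin, xmin, ymax, xmax]
scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy()  # COCO label-map ids; 16 is "bird" (there is no "chicken" class)
print([(int(c), round(float(s), 2)) for c, s in zip(classes, scores) if s > 0.5])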