Categories
sexing

pattern analysis in hyperspectral images

In-ovo sexing of 14-day-old chicken embryos by pattern analysis in hyperspectral images (VIS/NIR spectra): A non-destructive method for layer lines with gender-specific down feather color

Doreen Göhler, Björn Fischer, Sven Meissner

Free article https://www.sciencedirect.com/science/article/pii/S0032579119312398?via%3Dihub

science plz work on your diagram skills thnx

Abstract

Up to now there is no economically maintainable modality for chicken sexing in early embryonic stages (first 3 d) that is suitable for large-scale application in the commercial hatcheries. Hence, the culling of male day-old chicks of layer lines is still the normal procedure. In this paper we present a non-destructive optical technique for gender determination in layer lines with gender-specific down feather color. This particular chicken strain presents a sexual dimorphism in feather color, where the female day-old chicks have brown down feathers and the males have yellow down feathers. The eggs are candled with halogen lamps and a hyperspectral camera collects the transmitted light within the spectral range from 400 nm to 1,000 nm. For data analysis and classification, common methods like principal component analysis and linear discriminant analysis are used. The accuracy of gender determination was determined for 11- to 14-day-old embryos. At 14 d of incubation (7 d before hatch) the sex can be determined with an overall accuracy of approximately 97%.

Keywords: hyperspectral imaging; in ovo sexing; sexing of chicken embryo.

https://pubmed.ncbi.nlm.nih.gov/27591278/

I think this is the hyperspectral camera they are using (SPECIM PFD). Can’t find a price online easily. That’s probably not a good sign.
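The classification pipeline the abstract describes (PCA for dimensionality reduction, then LDA on the transmission spectra) is easy to sketch with scikit-learn. This is just a toy illustration of the method, not the paper’s code; the data shapes and labels below are placeholders.

# Hedged sketch of the PCA + LDA pipeline from the abstract (not the paper's code).
# X holds transmission spectra (one row per egg, one column per wavelength, 400-1000 nm);
# y holds the sex labels. Both are random placeholders here.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 600))            # placeholder spectra
y = rng.integers(0, 2, size=200)      # placeholder labels: 0 = male, 1 = female

clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))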

__________________________________________________________

Carrefour press release:

CARREFOUR FRANCE STARTS IN OVO SEXING TRIAL

Carrefour’s cage-free progress // Ending the culling of male chicks

Published 10/02/2020

For the first time in France, an in ovo sexing technique is being trialled by French retailer Carrefour, in partnership with its supplier Les Fermiers de Loué and the AAT group, a global specialist in hatching. 

The fast and non-invasive technology for sexing the egg by spectrophotometry (i.e. colour analysis) makes it possible to identify the sex of the birds before they hatch, thus avoiding the need to cull male chicks after their birth in the egg production cycle.

https://www.compassioninfoodbusiness.com/our-news/2020/02/carrefour-france-starts-in-ovo-sexing-trial

Categories
doin a heckin inspire

Chickens vs The Internet

Categories
chicken_research

for first time chicken owners

  1. Have the coop ready before buying chickens.
  2. Don’t overcrowd your coop.
  3. Feed a well-rounded diet: enough animal protein and high-quality greens (alfalfa hay, variety). They also need grit; that’s how they digest.
  4. Chickens can get worms (poopy eggs are a bad sign). Diatomaceous earth helps.
  5. They need shade and privacy.
Categories
AI/ML institutes

HumaneAI

Toward AI Systems that Augment and Empower Humans by Understanding Us, our Society and the World Around Us

https://www.humane-ai.eu/

Humane AI Concept and Research Plan

https://www.humane-ai.eu/wp-content/uploads/2019/11/D21-HumaneAI-Concept.pdf

Categories
chicken_research robots

Existing Chicken Robots

Found that Wageningen University in the Netherlands has a PoultryBot.

A bunch more here: https://www.denbow.com/8-digital-technologies-poultry-producers/

Obviously they’re all for industry. China and Thailand apparently have ‘robot nannies‘ monitoring chicken health. There are also some robots for feeding chickens, and for keeping them on their toes by driving into them.

PoulBot is a bit closer to the magic side, studying social ecology, with a robot for chicks to imprint on, which takes the chicks around, counts them, and goes back for any left behind. It beeps instead of clucks. The overhead camera is probably not realistic for most settings, but it allows a useful behaviour they call avoid-running-over-chick. The idea is to work out what cues are required for chicks to accept a robot as a mother.

Categories
dev Locomotion The Sentient Table

Spinning Up

OpenAI Spinning Up https://spinningup.openai.com/en/latest/

So I’ve got OpenAI’s PPO working. https://github.com/openai/spinningup/issues/142 – it needs a workaround to run your own envs.
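For reference, this is roughly how you end up calling it on a custom env (a sketch; ‘MyCustomEnv-v0’ is a placeholder id for your own registered env, and the exact import name depends on the Spinning Up version):

# Hedged sketch: running Spinning Up's PPO on your own registered gym env.
# 'MyCustomEnv-v0' is a placeholder; register your env with gym first.
import gym
from spinup import ppo_tf1 as ppo   # older releases expose it simply as `from spinup import ppo`

env_fn = lambda: gym.make('MyCustomEnv-v0')

ppo(env_fn,
    ac_kwargs=dict(hidden_sizes=(64, 64)),
    gamma=0.99,
    steps_per_epoch=4000,
    epochs=50)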

But I can’t work out how to increase the exploration “factor”.

It’s some sort of Gaussian noise added to the actions, I think, which is the simple idea.

It looks like clip_ratio is maybe what I need.

Hmm, but PPO doesn’t use it for that: clip_ratio just bounds how far each policy update can move, while the exploration noise actually comes from the Gaussian policy’s standard deviation.
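To make that concrete, here’s a tiny sketch (not Spinning Up’s actual code) of where the exploration noise lives in a Gaussian policy: the network outputs a mean action, a learned log_std sets the spread, and increasing that spread is what increases exploration.

# Conceptual sketch of Gaussian-policy exploration (not Spinning Up's actual code).
import numpy as np

act_dim = 6
log_std = -0.5 * np.ones(act_dim)   # learned parameter; Spinning Up initialises it around -0.5
std = np.exp(log_std)               # bigger std => more exploration

def sample_action(mu):
    """Sample a stochastic action around the policy mean mu."""
    return mu + std * np.random.randn(act_dim)

mu = np.zeros(act_dim)              # pretend the policy network produced this mean
print(sample_action(mu))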

parser.add_argument('--env', type=str, default='HalfCheetah-v2')
parser.add_argument('--hid', type=int, default=64)
parser.add_argument('--l', type=int, default=2)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', '-s', type=int, default=0)
parser.add_argument('--cpu', type=int, default=4)
parser.add_argument('--steps', type=int, default=4000)
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--exp_name', type=str, default='ppo')


def ppo(env_fn, actor_critic=core.mlp_actor_critic, ac_kwargs=dict(), seed=0,
        steps_per_epoch=4000, epochs=50, gamma=0.99, clip_ratio=0.2, pi_lr=3e-4,
        vf_lr=1e-3, train_pi_iters=80, train_v_iters=80, lam=0.97, max_ep_len=1000,
        target_kl=0.01, logger_kwargs=dict(), save_freq=10):

https://github.com/openai/spinningup/issues/12

OK, so I tried the SAC algo too, and the issue I have now is:

AttributeError: 'list' object has no attribute 'reshape'

So the issue is the dimensionality (really, the structure) of the observations.

“FetchReach environment has Dict observation space (because it packages not only arm position, but also the target location into the observation), and spinning up does not implement support for Dict observation spaces yet. One thing you can do is add a FlattenDictWrapper from gym (for example usage see, for instance,

env = FlattenDictWrapper(env, ['observation', 'desired_goal'])

Spinning Up implementations currently only support envs with Box observation spaces (where observations are real-valued vectors). These environments have Dict observation spaces, so each obs is a dict of (key, vector) pairs. If you want to test things out in these envs, I recommend doing it as a hacking project! 🙂 “
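So presumably something like this before handing the env to Spinning Up (a sketch, assuming an older gym release that still ships FlattenDictWrapper; newer gym versions replace it with FilterObservation plus FlattenObservation):

# Sketch: flatten a Dict observation space so Spinning Up's SAC/PPO can use the env.
# Assumes an older gym release that still provides FlattenDictWrapper.
import gym
from gym.wrappers import FlattenDictWrapper
from spinup import sac_tf1 as sac   # older releases: `from spinup import sac`

def env_fn():
    env = gym.make('FetchReach-v1')
    # Concatenate the chosen dict keys into one flat Box observation vector.
    return FlattenDictWrapper(env, ['observation', 'desired_goal'])

sac(env_fn, epochs=50)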

Categories
AI/ML deep

ML/DL COMPENDIUMS

Dr. Ori Cohen’s compendium, a noteworthy compilation of resources:

https://docs.google.com/document/d/1wvtcwc8LOb3PZI9huQOD7UjqUoY98N5r3aQsWKNAlzk/edit#

Gilbert Tanner:

https://gilberttanner.com/

Eugene Yan:

https://github.com/eugeneyan/applied-ml

Categories
AI/ML Vision

Simple Object Detection

Looks like you can just run TensorFlow.js with the COCO-SSD model really easily.

// Load the pre-trained COCO-SSD model and run detection on an <img id="image"> element.
import * as cocoSsd from "@tensorflow-models/coco-ssd";

const image = document.getElementById("image");

cocoSsd.load()
  .then(model => model.detect(image))
  .then(predictions => console.log(predictions)); // [{bbox, class, score}, ...]

But then you just get the categories COCO was trained on.  Bird.

Categories
AI/ML

TuriCreate

Apple also has an ML framework:

https://github.com/apple/turicreate

Turi Create simplifies the development of custom machine learning models. You don’t have to be a machine learning expert to add recommendations, object detection, image classification, image similarity or activity classification to your app.

  • Easy-to-use: Focus on tasks instead of algorithms
  • Visual: Built-in, streaming visualizations to explore your data
  • Flexible: Supports text, images, audio, video and sensor data
  • Fast and Scalable: Work with large datasets on a single machine
  • Ready To Deploy: Export models to Core ML for use in iOS, macOS, watchOS, and tvOS apps

With Turi Create, you can accomplish many common ML tasks:

  • Recommender: Personalize choices for users
  • Image Classification: Label images
  • Drawing Classification: Recognize Pencil/Touch Drawings and Gestures
  • Sound Classification: Classify sounds
  • Object Detection: Recognize objects within images
  • One Shot Object Detection: Recognize 2D objects within images using a single example
  • Style Transfer: Stylize images
  • Activity Classification: Detect an activity using sensors
  • Image Similarity: Find similar images
  • Classifiers: Predict a label
  • Regression: Predict numeric values
  • Clustering: Group similar datapoints together
  • Text Classifier: Analyze sentiment of messages
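For instance, the image classification task is only a few lines in Turi Create and exports straight to Core ML (a rough sketch; the folder layout, label column and file names here are assumptions, not from Apple’s docs):

# Hedged sketch: training a Turi Create image classifier and exporting to Core ML.
# Assumes images are sorted into per-class subfolders, e.g. images/chicken/ and images/not_chicken/.
import os
import turicreate as tc

data = tc.image_analysis.load_images('images/', with_path=True)
# Use the parent folder name as the label (an assumption about the folder layout).
data['label'] = data['path'].apply(lambda p: os.path.basename(os.path.dirname(p)))

train, test = data.random_split(0.8)
model = tc.image_classifier.create(train, target='label')
print(model.evaluate(test)['accuracy'])
model.export_coreml('ImageClassifier.mlmodel')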
Categories
AI/ML

Localised Narratives

https://google.github.io/localized-narratives/

One of Google’s recent projects, “image annotations connecting vision and language”.

It’s one of the features of the new Open Images v6 dataset.

Sounds like a good behaviour detection mechanism.