Categories
meta MFRU

Artwork Feedback

The Slovenian philosopher Maja Pan has written an interesting article on our work, at https://www.animot-vegan.com/tehnologizacija-skrbi/

I’ll need to reread it a few times, and think about it, to address specifics.

But for first remarks, just to address the most general concerns:

We’re glad that the artwork is provoking some critical discourse.

After MFRU, the five chickens went on to live a free-range life in the Maribor hills, with a spacious fox-proof coop that we built, and a family that looks after them now. We’ve received (obviously subjective) reports that the chickens are doing well, and are happy.

Philosophically, I don’t think improved welfare is counter-productive to abolitionist ideology.

But there are some 40 billion chickens in intensive (factory) farms. So there is, in reality, much room for improvement.

That is not to say we entirely disagree with abolitionist goals: it would probably be ethical to bring about the extinction of the modern broiler chicken.

But personally, I see abolitionism as unrealistic when juxtaposed with the magnitude of the intensive poultry farming industry in 2022, and misguided if it suggests that improving welfare is a bad idea because it acts as a balm to soothe the unthinking consumer’s conscience, allowing them to continue buying into fundamentally unethical practices. It’s an interesting idea, but people still need to make their own choices, with individual responsibility. The hope for a silver-bullet legal solution, like the EU outright banning animal farming on ethical grounds, is wishful thinking, because of the economic impact. That is not to say it’s impossible: one suggestion at the discussion was to subsidize the transition of poultry farming to hemp farming, which can produce similar levels of protein, and profit, assuming those are the underlying basic goals of the industry.

Our ongoing collaboration with the poultry scientist Dr. Brus is to develop statistical or machine-learned comparative welfare indicators, based on audio recordings. It is not done to justify existing industrial practice, but to encourage its improvement, by enabling rankings that differentiate between environments. Widespread application of objective welfare rankings would likely have the effect of artificial selection for more ‘humane’ treatment of animals, as farms seek to improve their scores. It’s not the binary ethics of abolitionist veganism, but it is at least an idea to open up the ethical gradient between zero and one, for the estimated 99% of the planet who are not vegan. It’s hard to argue with veganism, because it’s correct. But reality is something else.
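
To make that concrete: here’s a minimal sketch of what a comparative audio welfare ranking could look like, assuming a crude vocalization-rate feature. This is purely illustrative, not the method we’re developing with Dr. Brus; the feature, the file names, and the implied reading of the scores are all placeholder assumptions.

import librosa
import numpy as np

def vocalization_rate(file_name):
    # Crude proxy feature: detected onset events per second of audio.
    y, sr = librosa.load(file_name)
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    return len(onsets) / (len(y) / sr)

# Hypothetical recordings, one per environment to be compared.
environments = {"env_A": "env_A.wav", "env_B": "env_B.wav"}

scores = {name: vocalization_rate(path) for name, path in environments.items()}

# A ranking, not a binary judgement: environments can be compared, and improved.
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.2f} onsets/sec")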

Anyway, those are my initial thoughts.

Categories
doin a heckin inspire highly_speculative meta

Course Design by Prototype Release

I think my CSC 369 class at Cal Poly was a good model for a type of autonomous course design. There were only 9 students, and there was a presentation of each person’s project near the end of the course. What made it a bit more interesting is that the code was shared at the end, and some students incorporated other students’ code into their programs.

There was some remixing, because we all worked on different things. (I made ‘Gnapster’, a P2P file-sharing app with firewall circumvention, while others worked on MD5 hashing, caching, and multiple-source downloading.)

My thought was that the development of the GGR robots has been towards something simple and accessible, in order to be sure we get *something* working, in time. I have prototypes, and a UI now, and I add features to the robot app, building it up into something workable. Making small improvements.

Despite a couple of years now of playing with AI and ML and CV and robots, etc., I’m still way behind the curve. The project budget is my salary for maybe three months of work in industry (software engineering). In terms of R&D, we’re way past three months of work. But the scope and pace of the fields of AI, ML, CV, and robotics are vast, and fast.

“In theory, theory and practice are the same. In practice, they are not.” – attributed to Albert Einstein

Entire months down a rabbit hole can end with a negative result. Time flies. I felt I needed to start aiming lower at some point, when the reality hit. Miranda and I are both good at time crunches, so I know we’ll have something presentable as we get close to the deadline. But there’s only so much one can get done in a certain amount of time, given the constraints of life: work, responsibility, friends, family, health, etc.

So my thought was, more or less: platforming, open-source style, for the sake of remixing. Course design based on remixing the contributions of all the participants, who start from the result of previous iterations.

A major issue with this idea is version control, because if the remix is done like a software engineering project, merging contributions will bite students. At some companies I worked at, there was always a rush to get your code in first, to avoid being the one who had to update and merge before committing.

There are ways around this issue, though. Mob programming with an expert at the helm is probably the best way to merge code.

So the idea was: a course or workshop where people learn to do something, and then they all work on different, individual improvements on that something, and then the merged improvements become the new prototype, upon which future coursework might be built.

Sometimes you might just want to scrap the janky mess it’s becoming, and use a different paradigm entirely. But sometimes you don’t have time for a more elegant paradigm.

Categories
AI/ML deep dev envs meta

Codex

I applied for the OpenAI Codex beta, and tried it out. It’s very cool.

It’s like GPT trained on GitHub, or something like that: you can just give it a prompt, and it tries to carry on the text with what it expects to come next.

So the easy way to use it, I’ve found, is to write a comment for the code you want it to write for you. I switched the engine to ‘davinci-codex’ and cranked up the response length to 1600 or so. Sometimes it just makes up more comments, or does something silly. But now and then it hits a home run.

Here’s an example.

"""
Take an example wav file, found at file_name="/home/jovyan/work/307.wav", and plot a mel spectrogram of it. 
""" 

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np  # not in Codex's output, but needed for np.max below

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)
log_S = librosa.power_to_db(S, ref=np.max)
plt.figure(figsize=(12, 4))
librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')
plt.title('mel power spectrogram')
plt.colorbar(format='%+02.0f dB')
plt.tight_layout()
plt.show()

I had to install librosa, but then it worked. It did more than I asked for.

Then I fed that whole thing, with another

"""
Plot the MFCC of file_name="/home/jovyan/work/307.wav"
"""

and it carried on like an example-giving champ:

"""
Take an example wav file, found at file_name="/home/jovyan/work/307.wav", and plot a mel spectrogram of it. 
""" 

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np  # again missing from the generated code; needed for np.max

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)
log_S = librosa.power_to_db(S, ref=np.max)
plt.figure(figsize=(12, 4))
librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')
plt.title('mel power spectrogram')
plt.colorbar(format='%+02.0f dB')
plt.tight_layout()
plt.show()




"""
Plot the MFCC of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
plt.figure(figsize=(12, 4))
librosa.display.specshow(mfccs, x_axis='time')
plt.colorbar()
plt.title('MFCC')
plt.tight_layout()
plt.show()


"""
Plot the chromagram of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

chroma = librosa.feature.chroma_stft(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')
plt.colorbar()
plt.title('Chromagram')
plt.tight_layout()
plt.show()


"""
Plot the chromagram of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

chroma = librosa.feature.chroma_stft(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')
plt.colorbar()
plt.title('Chromagram')
plt.tight_layout()
plt.show()


"""
Plot the tonnetz of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

tonnetz = librosa.feature.tonnetz(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(tonnetz, y_axis='tonnetz', x_axis='time')
plt.colorbar()
plt.title('Tonal Centroids (Tonnetz)')
plt.tight_layout()
plt.show()


"""
Plot the zero-crossing rate of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

zrate = librosa.feature.zero_crossing_rate(y)
plt.figure(figsize=(12, 4))
librosa.display.specshow(zrate, x_axis='time')
plt.colorbar()
plt.title('Zero Crossing Rate')
plt.tight_layout()
plt.show()


"""
Plot the spectral centroid of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

cent = librosa.feature.spectral_centroid(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(cent, x_axis='time')
plt.colorbar()
plt.title('Spectral Centroids')
plt.tight_layout()
plt.show()


"""
Plot the spectral bandwidth of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

spec_bw = librosa.feature.spectral_bandwidth(y=y, sr=sr)
plt.figure(figsize=(12, 4))
librosa.display.specshow(spec_bw, x_axis='time')
plt.colorbar()
plt.title('Spectral Bandwidth')
plt.tight_layout()
plt.show()


"""
Plot the spectral rolloff of file_name="/home/jovyan/work/307.wav"
"""

import librosa
import librosa.display
import matplotlib.pyplot as plt

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 4))
librosa.display.waveplot(y, sr=sr)
plt.show()

spec_rolloff = lib

After waiting a while… we get:

Ok, Chromagram twice… bold move, Codex.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np  # missing from the generated code; needed for np.abs / np.max

file_name="/home/jovyan/work/307.wav"

y, sr = librosa.load(file_name)

plt.figure(figsize=(12, 8))
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
plt.subplot(4, 2, 1)
librosa.display.specshow(D, y_axis='linear')
plt.colorbar(format='%+2.0f dB')
plt.title('Linear-frequency power spectrogram')

plt.subplot(4, 2, 2)
librosa.display.specshow(D, y_axis='log')
plt.colorbar(format='%+2.0f dB')
plt.title('Log-frequency power spectrogram')

plt.subplot(4, 2, 3)
librosa.display.specshow(D, x_axis='time')
plt.colorbar(format='%+2.0f dB')
plt.title('Time-frequency power spectrogram')

plt.subplot(4, 2, 4)
librosa.display.specshow(D, x_axis='time', y_axis='log')
plt.colorbar(format='%+2.0f dB')
plt.title('Log-Time-frequency power spectrogram')
Categories
highly_speculative meta UI

“Mechanical Turk”ing

Audience participation could add data points and labels, for classification training or similar. But what?

Classification needs a user interface. I saw one here:

[Figure 4 from “Collecting Labels for Rare Anomalies via Direct Human Feedback—An Industrial Application Study” (Informatics 6, 38)]

“What type of anomaly is this?”

[Figure 1 from the same paper: “Reporting an Anomaly”]

Here is Miranda demonstrating a similar skill
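
As a placeholder for how the audience labels might be gathered, here’s a minimal sketch of a label-collection loop. The label set and sample ids are invented; a real version would play the clip or show the spectrogram before asking.

import csv

# Hypothetical anomaly classes for the "What type of anomaly is this?" prompt.
LABELS = ["normal", "distress call", "equipment noise", "other"]

def collect_label(sample_id):
    print(f"Sample {sample_id}: what type of anomaly is this?")
    for i, label in enumerate(LABELS):
        print(f"  [{i}] {label}")
    return LABELS[int(input("> "))]

# Append each answer as a (sample, label) row for later training.
with open("labels.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for sample_id in ["307_seg01", "307_seg02"]:  # placeholder sample ids
        writer.writerow([sample_id, collect_label(sample_id)])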

Categories
evolution highly_speculative meta The Chicken Experience

Ameglian Major Cow, Chairdogs, etc.

I thought it’s probably worth noting some sci-fi ideas that I remember…

The Ameglian Major Cow (or, Dish of the Day) was the cow in the second Hitchhiker’s Guide book (The Restaurant at the End of the Universe) that had been bred to want to be eaten; when Zaphod Beeblebrox and company order steak, the cow goes off to shoot itself, and tells the protagonist, Arthur, not to worry: “I’ll be very humane.”

The other thing was chairdogs, which show up in one of the later Dune books. The Tleilaxu are known for taking genetic engineering a bit too far, and one of their exports is a dog that has been bred until it has become a chair, which massages you. Real ‘creature comforts’. It’s generally used for world-building and character development, maybe insinuating that the characters with guilty-pleasure chairdogs are getting soft.

Interesting, because the artificial selection or genetic engineering leading to these creations is questionable, but the final product is something that inverts or sidesteps morality.

The Ameglian Major Cow is a common thought experiment, as it poses a question to vegetarians: would they eat meat if it were intelligent and wanted to be eaten? It’s very hypothetical, though.

Chairdogs are closer: if the chickens of tomorrow, or other domesticated animals, are ultimately evolved or engineered into simplified protein vats, their ‘souls’ (i.e. CNS) removed, perhaps we’re left with something less problematic, despite the apparent abomination against nature.

Side note: Alex O’Connor, Cosmic Skeptic, has a lot to say on the philosophy of ethical veganism. Here he answers the question “Do Animals Have a ‘Right to Life’?” (tl;dw: no, but you should eat less meat).

Categories
Behaviour envs meta simulation

Animal-AI 2.0

Like Meta-World, but with 900 tasks, and built on Unity: http://animalaiolympics.com/AAI/

GitHub: https://github.com/beyretb/AnimalAI-Olympics

“The Animal-AI Olympics was built using Unity’s ML-Agents Toolkit.

The Python library located in animalai extends ml-agents v0.15.0. Mainly, we add the possibility to change the configuration of arenas between episodes.”

To get an idea of the experiments: http://animalaiolympics.com/AAI/testbed
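
From the repo’s README, using it looks roughly like this. An untested sketch: the module paths and keyword names have changed between versions, so treat them as approximate.

from animalai.envs import UnityEnvironment, ArenaConfig

# Paths are placeholders; the compiled environment binary and example
# YAML arena configurations ship with the repository.
env = UnityEnvironment(file_name="env/AnimalAI")
food_arena = ArenaConfig("configs/1-Food.yaml")
maze_arena = ArenaConfig("configs/10-Mazes.yaml")

# The key addition over stock ml-agents: a new arena configuration
# can be passed in at reset time, between episodes.
env.reset(arenas_configurations=food_arena)
# ... run an episode ...
env.reset(arenas_configurations=maze_arena)
env.close()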

They had a competition of ‘animal AIs’ in 2019, using EvalAI:


The competition was kindly hosted on EvalAI, an open source web application for AI competitions. Special thanks to Rishabh Jain for his help in setting this up. We will aim to reopen submissions with new hidden files in order to keep some form of competition going.

Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee and Dhruv Batra (2019), “EvalAI: Towards Better Evaluation Systems for AI Agents”. arXiv: https://arxiv.org/pdf/1902.03570.pdf

Categories
AI/ML meta

Meta-learning & MAML

For, like, having fall-back plans when things go wrong. Or phasing between policies, so you don’t “drop the ball”.

https://arxiv.org/abs/1703.03400

Reminds me of MAP-Elites, in that it collects behaviours.

“We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.”
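
The nested loop is simple enough to sketch. Here’s a toy first-order variant (FOMAML, which drops full MAML’s second-order term) on one-parameter regression tasks. All the numbers are arbitrary; it’s just to show the inner adaptation step and the outer meta-update.

import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.1, 0.01  # inner (adaptation) and outer (meta) learning rates

def sample_task():
    # Each task: fit y = w * x for a task-specific slope w.
    w = rng.uniform(-2, 2)
    x = rng.uniform(-1, 1, size=10)
    return x, w * x

def grad(theta, x, y):
    # d/dtheta of mean squared error for the model y_hat = theta * x.
    return np.mean(2 * (theta * x - y) * x)

theta = 0.0
for step in range(1000):
    meta_grad = 0.0
    for _ in range(5):  # a batch of tasks per meta-update
        x, y = sample_task()
        # Inner loop: one gradient step on the task's support set.
        adapted = theta - alpha * grad(theta, x[:5], y[:5])
        # First-order approximation: evaluate the gradient of the query-set
        # loss at the adapted parameters, and apply it to theta directly.
        meta_grad += grad(adapted, x[5:], y[5:])
    theta -= beta * meta_grad / 5  # outer loop: meta-update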

Mostly based around the ES (evolution strategies) algorithm, they got a robot to walk straight again soon after hobbling it: https://ai.googleblog.com/2020/04/exploring-evolutionary-meta-learning-in.html

https://arxiv.org/pdf/2003.01239.pdf

“we present an evolutionary meta-learning algorithm that enables locomotion policies to quickly adapt in noisy real world scenarios. The core idea is to develop an efficient and noise-tolerant adaptation operator, and integrate it into meta-learning frameworks. We have shown that this Batch Hill-Climbing operator works better in handling noise than simply averaging rewards over multiple runs. Our algorithm has achieved greater adaptation performance than the state-of-the-art MAML algorithms that are based on policy gradient. Finally, we validate our method on a real quadruped robot. Trained in simulation, the locomotion policies can successfully adapt to two real-world robot environments, whose dynamics have been drastically changed.

In the future, we plan to extend our method in several ways. First, we believe that we can replace the Gaussian perturbations in the evolutionary algorithm with non-isotropic samples to further improve the sample efficiency during adaptation. With less robot data required for adaptation, we plan to develop a lifelong learning system, in which the robot can continuously collect data and quickly adjust its policy to learn new skills and to operate optimally in new environments.”
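
My rough reading of the Batch Hill-Climbing idea (a sketch of the concept, not the paper’s code): rather than averaging noisy rewards for a single candidate, evaluate a whole batch of perturbations and move to the best one, repeating from there.

import numpy as np

rng = np.random.default_rng(0)

def noisy_reward(params):
    # Stand-in for a real rollout: peak at params = 1, plus evaluation noise.
    return -np.sum((params - 1.0) ** 2) + rng.normal(scale=0.5)

def batch_hill_climb(params, sigma=0.1, batch=16, steps=50):
    best = params
    for _ in range(steps):
        # Evaluate a batch of Gaussian perturbations around the incumbent...
        candidates = [best + sigma * rng.standard_normal(best.shape)
                      for _ in range(batch)]
        candidates.append(best)  # ...keeping the incumbent in the running.
        best = max(candidates, key=noisy_reward)
    return best

adapted_params = batch_hill_climb(np.zeros(4))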

Categories
meta

Musing on audio/visual/motor brain

Just some notes to myself. We’re going to be doing some advanced shitty robots here, with Sim-To-Real policy transfer.

ENSEMBLE NNs

I had a look at merging NNs, and found this: https://machinelearningmastery.com/ensemble-methods-for-deep-learning-neural-networks/ with this link as one of the most recent articles: https://arxiv.org/abs/1803.05407. The article covers averaging the predictions of several trained networks; the paper (stochastic weight averaging) instead averages the weights of snapshots of a single network along its training trajectory.
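
A minimal sketch of both flavours, assuming Keras-style models (only .predict() and .get_weights() are assumed):

import numpy as np

def ensemble_predict(models, x):
    # Prediction averaging: mean of the class-probability outputs of
    # several separately trained models.
    probs = np.mean([m.predict(x) for m in models], axis=0)
    return np.argmax(probs, axis=-1)

def averaged_weights(snapshots):
    # SWA-flavoured alternative: average corresponding weight tensors from
    # snapshots of the *same* architecture, then load the result back into
    # one model with model.set_weights(...).
    weight_sets = [m.get_weights() for m in snapshots]
    return [np.mean(tensors, axis=0) for tensors in zip(*weight_sets)]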

AUDIO

For audio there’s https://github.com/google-coral/project-keyword-spotter which uses a ~72 euro Edge TPU accelerator https://coral.ai/products/accelerator/ for fast processing.

I’ve seen convolutional NNs applied to spectrograms of audio (e.g. https://medium.com/gradientcrescent/urban-sound-classification-using-convolutional-neural-networks-with-keras-theory-and-486e92785df4). Anyway, it’s secondary. We can have it work with a mic with a volume threshold to start with.
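
The volume-threshold starting point is only a few lines. A sketch using the sounddevice library, with a threshold value that would need tuning on the actual mic:

import numpy as np
import sounddevice as sd

THRESHOLD = 0.02  # RMS level that counts as "loud"; tune empirically

def callback(indata, frames, time, status):
    rms = np.sqrt(np.mean(indata ** 2))
    if rms > THRESHOLD:
        # This is where the heavier classifier (spectrogram CNN, keyword
        # spotter, etc.) would be triggered.
        print(f"sound event, RMS={rms:.3f}")

with sd.InputStream(channels=1, samplerate=16000, callback=callback):
    sd.sleep(10_000)  # listen for ten seconds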

MOTION

Various neural networks will be trained in simulation, to perform different tasks, with egg-, chicken- and human-looking objects. Ideally we develop a robot that can’t really fall over.

We need to decide whether we’re giving it spatial awareness in 3D, using point clouds maybe? Creating mental maps of the environment?

VISION

Convolutional networks are typical for vision tasks. We can, however, use HyperNEAT for visual discrimination; see: https://github.com/PacktPublishing/Hands-on-Neuroevolution-with-Python/tree/master/Chapter7

But what will make sense is to have the RPi take pics, send them across to a server on a desktop computer, play around with the image in OpenCV first, and then feed that to the neuro-evolution process.
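
A sketch of that pipeline; the endpoint, port, and the 16x16 input grid are placeholder choices. The server decodes each posted JPEG with OpenCV and shrinks it to the small normalized grid a neuroevolved network would take as input.

# Server side (desktop): receive frames from the RPi, preprocess in OpenCV,
# and hand a small input vector to the neuro-evolution process.
import cv2
import numpy as np
from flask import Flask, request

app = Flask(__name__)

@app.route("/frame", methods=["POST"])
def frame():
    buf = np.frombuffer(request.data, dtype=np.uint8)
    img = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(img, (16, 16))   # tiny grid, HyperNEAT-friendly
    inputs = small.flatten() / 255.0    # normalized to [0, 1]
    # ... feed `inputs` to the evolving network here ...
    return "ok"

# Client side (on the RPi), posting one captured frame:
#   import cv2, requests
#   ok, jpg = cv2.imencode(".jpg", cv2.VideoCapture(0).read()[1])
#   requests.post("http://desktop:5000/frame", data=jpg.tobytes())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)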