https://flow-project.github.io/index.html
Seems to be a framework providing traffic-simulation environments for deep reinforcement learning.
https://docs.ray.io/en/master/rllib-algorithms.html Seems like this might be the most up-to-date repo of baseline RL algorithm implementations.
Ray is a fast and simple framework for building and running distributed applications.
Ray is packaged with libraries for accelerating machine learning workloads, including Tune (scalable hyperparameter tuning) and RLlib (scalable reinforcement learning).
ARS implementation: https://github.com/ray-project/ray/blob/master/rllib/agents/ars/ars.py
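RLlib trainers get launched through Tune. Here is a minimal sketch of running the ARS trainer with the old `ray.tune.run` API; the environment name, worker count, and stopping criterion are placeholders, not from the notes:

```python
# Minimal sketch: running RLlib's ARS (Augmented Random Search) trainer via Tune.
# Assumes `pip install "ray[rllib]" gym`; config values are placeholders.
import ray
from ray import tune

ray.init()
tune.run(
    "ARS",                               # resolves to rllib/agents/ars/ars.py
    config={
        "env": "CartPole-v0",            # any registered Gym environment
        "num_workers": 2,                # parallel rollout workers
    },
    stop={"episode_reward_mean": 150},   # stop once average reward is good enough
)
```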
arxiv: https://arxiv.org/pdf/1503.02531.pdf
Slides: https://www.ttic.edu/dl/dark14.pdf
What is Geoffrey Hinton’s Dark Knowledge?
Distilling the knowledge in an ensemble of models into a single model.
It was based on the ‘model compression’ paper by Rich Caruana and collaborators: http://www.cs.cornell.edu/~caruana/
http://www.cs.cornell.edu/~caruana/compression.kdd06.pdf
There is a distinction between hard and soft targets when you train a smaller network to reproduce the results of a bigger network. If you train the smaller network on a cost function that only matches the larger network’s final hard predictions, you lose knowledge that was encoded in the softer targets: the full output distribution also tells you how likely each class is to be mistaken for the other classes. Changing the softmax function at the end of the classification network (raising its temperature) exposes that information.
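The ‘soft targets’ trick boils down to a temperature parameter in the softmax: dividing the logits by T > 1 before normalizing keeps probability mass on the near-miss classes. A minimal numpy sketch (the logits here are made up for illustration):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Softmax over logits / T; higher T gives a 'softer' distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

teacher_logits = [10.0, 5.0, 1.0]     # made-up teacher logits
print(softmax_with_temperature(teacher_logits, T=1))  # nearly one-hot 'hard' target
print(softmax_with_temperature(teacher_logits, T=5))  # softer: runner-up classes still visible
```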
Someone collected all the cat-related AI papers: https://github.com/junyanz/CatPapers http://people.csail.mit.edu/junyanz/cat/cat_papers.html
Currently we have LSD-SLAM working, and that’s cool for us humans to see, but having an object mesh to work with makes more sense, at least in terms of simulator integration. There’s object detection, semantic segmentation, etc., and in the end I want the robot to have something like a relative coordinate system. But robots will probably get by with just pixels and stochastic magic.
The big idea for me here is to transform monocular camera images into mesh objects. Those .obj files (or whatever format) could be imported into the physics engine for training in simulation, as sketched below.
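A minimal sketch of that last step, loading a reconstructed .obj into a physics engine, assuming pybullet; the file name is a placeholder:

```python
import pybullet as p

p.connect(p.DIRECT)  # headless physics server (use p.GUI to watch)

# Build collision and visual shapes straight from the .obj mesh
col = p.createCollisionShape(p.GEOM_MESH, fileName="reconstructed_object.obj")
vis = p.createVisualShape(p.GEOM_MESH, fileName="reconstructed_object.obj")

# Drop the mesh into the world as a rigid body, 1 m above the origin
body = p.createMultiBody(baseMass=1.0,
                         baseCollisionShapeIndex=col,
                         baseVisualShapeIndex=vis,
                         basePosition=[0, 0, 1])
```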
arxiv: https://arxiv.org/pdf/1809.05910v2.pdf
project page: https://ranahanocka.github.io/MeshCNN/
The PhD candidate: https://www.cs.tau.ac.il/~hanocka/ – In the Q&A at the end, she mentions AtlasNet https://arxiv.org/abs/1802.05384 as only being able to address local structures. Her latest research looks interesting too: https://arxiv.org/pdf/2003.13326.pdf
ShapeNet https://arxiv.org/abs/1512.03012 seems to be a common resource; https://arxiv.org/pdf/2004.15004v2.pdf and these .obj files might also be interesting: https://www.dropbox.com/s/w16st84r6wc57u7/shrec_16.tar.gz
https://app.wandb.ai/gabesmed/examples-tf-estimator-mnist/runs/98nmh0vy/tensorboard?workspace=user-
Hope that works. It’s that guy on YouTube (Two Minute Papers) who says ‘dear fellow scholars’ and ‘what a time to be alive’.
The advertising was: Lambda GPU Cloud, $20 for ImageNet training, no setup required. Good to know.
Looks like a nice UI for tracking experiments: https://www.wandb.com/articles
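The logging API itself is tiny. A minimal sketch, assuming `pip install wandb`; the project name and metric are placeholders:

```python
import wandb

wandb.init(project="scratch-experiments")   # hypothetical project name
for step in range(100):
    loss = 1.0 / (step + 1)                 # stand-in for a real training loss
    wandb.log({"loss": loss})               # each call adds a point to the dashboard
```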
https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.13103
There was a contest in 2017 for bird-call detection neural nets (the Bird Audio Detection Challenge). The best entries used mel spectrograms of the audio fed into convolutional neural networks, as sketched after the link below.
The winning algorithm’s code is available: https://jobim.ofai.at/gitlab/gr/bird_audio_detection_challenge_2017/tree/master
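A minimal sketch of that mel-spectrogram front end, assuming librosa; the file name is a placeholder:

```python
import librosa
import numpy as np

y, sr = librosa.load("bird_recording.wav", sr=22050)         # waveform as a float array
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80)  # mel-scaled power spectrogram
log_mel = librosa.power_to_db(mel, ref=np.max)               # log scale, as CNN input

# log_mel has shape (n_mels, frames) and can be fed to a CNN like a grayscale image
print(log_mel.shape)
```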
https://deepai.org/machine-learning-glossary-and-terms/hyperparameter
“A hyperparameter is a parameter that is set before the learning process begins. These parameters are tunable and can directly affect how well a model trains.”
There are a few hyperparameter optimizers:
http://hyperopt.github.io/hyperopt/
(Ended up using Tune, in the Ray framework; sketch below.)
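A minimal sketch of a Tune search, using the old function API with `tune.report`; the objective is a stand-in, not a real model:

```python
from ray import tune

def objective(config):
    # Stand-in objective: pretend the score depends only on the learning rate
    score = (config["lr"] - 0.01) ** 2
    tune.report(score=score)

analysis = tune.run(
    objective,
    config={"lr": tune.loguniform(1e-4, 1e-1)},  # search space for the learning rate
    num_samples=20,                              # number of sampled trials
)
print(analysis.get_best_config(metric="score", mode="min"))
```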