Soft Tensegrity Robots, jiggling around:
https://www.youtube.com/watch?v=SuLQDhrk9tQ
“Neat” – Youtube comment
Another PhD collaboration, funded by a European Research Council grant (2015–2020): https://www.resibots.eu/videos.html. Nice. They’re the ones who developed MAP-Elites: https://arxiv.org/abs/1504.04909
They had a paper published in Nature for their robots that repair themselves: https://members.loria.fr/JBMouret/nature_press.html
MAP-Elites is interesting. It categorises behaviours and keeps the local optimum for each cell of some user-chosen dimensions of variation. Haven’t read the paper yet. It is windy.
“It creates a map of high-performing solutions at each point in a space defined by dimensions of variation that a user gets to choose. This Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm illuminates search spaces, allowing researchers to understand how interesting attributes of solutions combine to affect performance, either positively or, equally of interest, negatively. “
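The quote above translates into a surprisingly small loop. Here is a minimal sketch in Python, just to make the archive idea concrete; the genome, mutation, fitness, and behaviour descriptor below are made-up toy stand-ins, not the paper’s.

```python
import random

def fitness(x):
    # toy fitness: prefer genomes whose entries sum near zero (stand-in, not from the paper)
    return -abs(sum(x))

def behaviour(x):
    # toy 1-D behaviour descriptor: bin the first gene into 10 cells
    return min(9, max(0, int((x[0] + 1) * 5)))

def map_elites(iterations=2000, genome_len=3, seed=0):
    random.seed(seed)
    archive = {}  # cell -> (genome, fitness): one elite kept per behaviour cell
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # mutate a random elite already in the archive
            parent, _ = random.choice(list(archive.values()))
            child = [g + random.gauss(0, 0.1) for g in parent]
        else:
            # otherwise sample a fresh random genome in [-1, 1]
            child = [random.uniform(-1, 1) for _ in range(genome_len)]
        cell, f = behaviour(child), fitness(child)
        # replace the cell's incumbent only if the child performs better
        if cell not in archive or f > archive[cell][1]:
            archive[cell] = (child, f)
    return archive

archive = map_elites()
print(len(archive), "behaviour cells illuminated")
```

The “illumination” part is exactly that final map: each cell shows the best fitness reachable with that behaviour, so you can see which attribute combinations help or hurt.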
https://github.com/google-research/football
Google made an openAI gym environment for playing football.
It looks better than FIFA.
Someone collected all the cat related AI papers: https://github.com/junyanz/CatPapers http://people.csail.mit.edu/junyanz/cat/cat_papers.html
Currently we have LSD-SLAM working, and that’s cool for us humans to see stuff, but having an object mesh to work with makes more sense, at least in terms of simulator integration. There’s object detection, semantic segmentation, etc., etc., and in the end I want the robot to have a relative coordinate system, in a way. But robots will probably get by with just pixels and stochastic magic.
The big idea for me, here, is to transform monocular camera images into mesh objects. Those .obj files, or whatever the format is, could be imported into the physics engine for training in simulation.
arxiv: https://arxiv.org/pdf/1809.05910v2.pdf
github: https://ranahanocka.github.io/MeshCNN/
The PhD candidate: https://www.cs.tau.ac.il/~hanocka/ – In the Q&A at the end, she mentions AtlasNet https://arxiv.org/abs/1802.05384 as only being able to address local structures. Latest research looks interesting too https://arxiv.org/pdf/2003.13326.pdf
ShapeNET https://arxiv.org/abs/1512.03012 seems to be a common resource, and https://arxiv.org/pdf/2004.15004v2.pdf and these obj files might be interesting https://www.dropbox.com/s/w16st84r6wc57u7/shrec_16.tar.gz
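For reference, those .obj files are just plain text: `v x y z` lines for vertices and `f i j k` lines for (1-based) face indices. A from-scratch toy parser, nothing to do with MeshCNN’s actual loader:

```python
def parse_obj(lines):
    """Parse a minimal Wavefront .obj: 'v x y z' vertices and 'f i j k' faces.
    Face indices in .obj are 1-based; convert them to 0-based here."""
    verts, faces = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":
            # a face entry may be 'i', 'i/t' or 'i/t/n'; keep only the vertex index
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return verts, faces

# a single unit triangle as a tiny inline .obj
tri = """v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3""".splitlines()

verts, faces = parse_obj(tri)
print(verts, faces)  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)] [(0, 1, 2)]
```

That vertex/face structure is also exactly what a physics engine wants for a collision mesh, which is why the image-to-mesh pipeline is appealing.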
Somehow didn’t find this until now, but it organises papers by machine learning topic/task:
https://paperswithcode.com/ https://paperswithcode.com/sota
ooh https://github.com/CorentinJ/Real-Time-Voice-Cloning
https://medium.com/paperswithcode/a-home-for-results-in-ml-e25681c598dc
https://app.wandb.ai/gabesmed/examples-tf-estimator-mnist/runs/98nmh0vy/tensorboard?workspace=user-
hope that works. It’s that guy on youtube who says ‘dear scholars’ and ‘what a time to be alive’.
The advertising was for Lambda GPU cloud: $20 for ImageNet training, no setup required. Good to know.
looks like a nice UI for stuff : https://www.wandb.com/articles
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping: https://arxiv.org/pdf/1805.07831.pdf
Optimizing Simulations with Noise-Tolerant Structured Exploration: https://arxiv.org/pdf/1805.07831.pdf
The ‘magic’ underlying PyTorch https://towardsdatascience.com/pytorch-autograd-understanding-the-heart-of-pytorchs-magic-2686cd94ec95
“That is true. As I wrote earlier, PyTorch is a jacobian-vector product engine. In the process it never explicitly constructs the whole Jacobian. It’s usually simpler and more efficient to compute the JVP directly.”
Source: https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf
jacobian-vector products?
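To answer my own question: for f: ℝⁿ → ℝᵐ with Jacobian J, a Jacobian-vector product is J·v computed without ever materialising the n×m matrix J. (Strictly, PyTorch’s backward pass computes vector-Jacobian products vᵀJ, but the efficiency argument is the same.) A toy check in plain Python, using elementwise squaring, where J = diag(2x):

```python
def f(x):
    # elementwise square: f(x)_i = x_i ** 2
    return [xi ** 2 for xi in x]

def jacobian(x):
    # full Jacobian of the elementwise square: the diagonal matrix diag(2 * x_i)
    n = len(x)
    return [[2 * x[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def jvp(x, v):
    # Jacobian-vector product computed directly, never building the n x n matrix
    return [2 * xi * vi for xi, vi in zip(x, v)]

x = [1.0, 2.0, 3.0]
v = [0.5, -1.0, 2.0]

# explicit J @ v, the wasteful way
J = jacobian(x)
explicit = [sum(J[i][j] * v[j] for j in range(len(v))) for i in range(len(x))]

print(explicit, jvp(x, v))  # both [1.0, -4.0, 12.0]
```

The direct version is O(n) here versus O(n²) for building J first, which is the whole point of the quote above.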
This is trippy shit. https://jukebox.openai.com/
AI music https://openai.com/blog/jukebox/