Categories
deep hardware institutes

SLIDE

CPU beating GPU. Lol NVIDIA. SELL! SELL!

https://news.rice.edu/2020/03/02/deep-learning-rethink-overcomes-major-obstacle-in-ai-industry/

arXiv: https://arxiv.org/pdf/1903.03129.pdf

Conclusion:


We provide the first evidence that a smart algorithm with modest CPU OpenMP parallelism can outperform the best available hardware NVIDIA-V100, for training large deep learning architectures. Our system SLIDE is a combination of carefully tailored randomized hashing algorithms with the right data structures that allow asynchronous parallelism. We show up to 3.5x gain against TF-GPU and 10x gain against TF-CPU in training time with similar precision on popular extreme classification datasets. Our next steps are to extend SLIDE to include convolutional layers. SLIDE has unique benefits when it comes to random memory accesses and parallelism. We anticipate that a distributed implementation of SLIDE would be very appealing because the communication costs are minimal due to sparse gradients.
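
The core trick is adaptive sparsity: instead of computing every neuron in a huge layer, SLIDE hashes the input and only touches neurons whose weight vectors land in the same hash bucket. Here's a toy single-table SimHash sketch of that idea in Python; the real system uses multiple tables, densified winner-take-all hashing, and asynchronous OpenMP gradient updates, and all the names below are mine:

import numpy as np

# Toy sketch of SLIDE's adaptive sparsity: bucket neurons by a
# locality-sensitive hash (SimHash), then compute activations only for
# neurons whose hash code collides with the input's.
rng = np.random.default_rng(0)
d, n_neurons, n_bits = 64, 4096, 8

W = rng.standard_normal((n_neurons, d))    # one weight vector per neuron
planes = rng.standard_normal((n_bits, d))  # random hyperplanes for SimHash

def simhash(v):
    # Sign pattern of projections onto the hyperplanes, packed into an int.
    bits = planes @ v > 0
    return int("".join("1" if b else "0" for b in bits), 2)

# Pre-bucket the neurons (SLIDE rebuilds these tables only occasionally).
buckets = {}
for i in range(n_neurons):
    buckets.setdefault(simhash(W[i]), []).append(i)

x = rng.standard_normal(d)
active = buckets.get(simhash(x), [])  # likely-high-activation neurons
activations = W[active] @ x           # len(active) dot products, not n_neurons
print(f"computed {len(active)} of {n_neurons} activations")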

Categories
AI/ML deep GANs institutes

DeepAI APIs

https://deepai.org/apis

I made this at https://deepai.org/machine-learning-model/fast-style-transfer

Hehe cool.

There are a lot of them. Heh, the Parsey McParseface API: https://deepai.org/machine-learning-model/parseymcparseface

[
    {
        "tree": {
            "ROOT": [
                {
                    "index": 1,
                    "token": "What",
                    "tree": {
                        "cop": [
                            {
                                "index": 2,
                                "token": "is",
                                "pos": "VBZ",
                                "label": "VERB"
                            }
                        ],
                        "nsubj": [
                            {
                                "index": 4,
                                "token": "meaning",
                                "tree": {
                                    "det": [
                                        {
                                            "index": 3,
                                            "token": "the",
                                            "pos": "DT",
                                            "label": "DET"
                                        }
                                    ],
                                    "prep": [
                                        {
                                            "index": 5,
                                            "token": "of",
                                            "tree": {
                                                "pobj": [
                                                    {
                                                        "index": 6,
                                                        "token": "this",
                                                        "pos": "DT",
                                                        "label": "DET"
                                                    }
                                                ]
                                            },
                                            "pos": "IN",
                                            "label": "ADP"
                                        }
                                    ]
                                },
                                "pos": "NN",
                                "label": "NOUN"
                            }
                        ],
                        "punct": [
                            {
                                "index": 7,
                                "token": "?",
                                "pos": ".",
                                "label": "."
                            }
                        ]
                    },
                    "pos": "WP",
                    "label": "PRON"
                }
            ]
        },
        "sentence": "What is the meaning of this?"
    }
]
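
For reference, the JSON above came back from a POST request. A minimal sketch with requests, assuming the endpoint follows DeepAI's usual api.deepai.org/api/<model-name> pattern; the endpoint name and form field here are my guesses, so check their docs:

import requests

# Hypothetical call; endpoint name guessed from DeepAI's URL pattern.
resp = requests.post(
    "https://api.deepai.org/api/parseymcparseface",
    data={"text": "What is the meaning of this?"},
    headers={"api-key": "YOUR_API_KEY"},  # free key from deepai.org
)
print(resp.json())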

Some curated research too: https://deepai.org/research – one article, https://arxiv.org/pdf/2007.05558v1.pdf, arguing that deep learning is becoming unsustainably resource-intensive.

Conclusion
The explosion in computing power used for deep learning models has ended the “AI winter” and set new benchmarks for computer performance on a wide range of tasks. However, deep learning’s prodigious appetite for computing power imposes a limit on how far it can improve performance in its current form, particularly in an era when improvements in hardware performance are slowing. This article shows that the computational limits of deep learning will soon be constraining for a range of applications, making the achievement of important benchmark milestones impossible if current trajectories hold. Finally, we have discussed the likely impact of these computational limits: forcing Deep Learning towards less computationally-intensive methods of improvement, and pushing machine learning towards techniques that are more computationally-efficient than deep learning.

Yeah, well, the neocortex has six layers, with sparse distributed representations and voting/normalising layers. Just a 3D graph of neurons, doing some wiggly things.

Categories
dev institutes

Anyscale Academy

https://github.com/anyscale/academy

They have a relevant tutorial on RLlib (Ray's reinforcement learning library).
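
For the impatient, the basic RLlib loop is roughly this. A minimal sketch against the circa-2020 Ray API (pip install 'ray[rllib]'); trainer class names and config keys have moved around across versions:

import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()
trainer = PPOTrainer(env="CartPole-v0", config={"num_workers": 2})
for i in range(10):
    result = trainer.train()  # one training iteration
    print(i, result["episode_reward_mean"])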

Categories
AI/ML institutes

DeepMind

Somehow I've just been following OpenAI and missed all the action at the other big algorithm R&D company. https://deepmind.com/research

Experience Replay: https://deepmind.com/research/open-source/Reverb
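
Reverb is a standalone replay-buffer server you talk to over gRPC. A minimal sketch, assuming the dm-reverb pip package and its documented Server/Client API; the table settings here are arbitrary placeholders:

import numpy as np
import reverb

# One uniform-sampling FIFO table; the rate limiter blocks sampling
# until at least one item has been inserted.
server = reverb.Server(tables=[
    reverb.Table(
        name="experience",
        sampler=reverb.selectors.Uniform(),
        remover=reverb.selectors.Fifo(),
        max_size=100_000,
        rate_limiter=reverb.rate_limiters.MinSize(1),
    )
])

client = reverb.Client(f"localhost:{server.port}")
# Insert one (observation, action, reward) item with priority 1.0.
client.insert([np.zeros(4), np.int64(1), np.float32(0.5)],
              priorities={"experience": 1.0})

for sample in client.sample("experience", num_samples=1):
    print(sample)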

https://deepmind.com/research/open-source/Acme_os – their new RL research framework; https://github.com/deepmind/acme

https://github.com/deepmind/dm_control – seems they’re a MuJoCo house.
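
dm_control wraps MuJoCo tasks behind a simple suite API. A minimal random-policy sketch (it needs a MuJoCo install, which still required a licence back then):

import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
spec = env.action_spec()

timestep = env.reset()
while not timestep.last():
    # Uniform random actions within the spec's bounds.
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    timestep = env.step(action)
print("final reward:", timestep.reward)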

Categories
arxiv institutes

paperswithcode

Somehow I didn’t find this until now. It divides papers into categories by machine learning topic and task.

https://paperswithcode.com/ https://paperswithcode.com/sota

Ooh: https://github.com/CorentinJ/Real-Time-Voice-Cloning

https://medium.com/paperswithcode/a-home-for-results-in-ml-e25681c598dc

Categories
AI/ML CNNs deep dev institutes

wandb

https://app.wandb.ai/gabesmed/examples-tf-estimator-mnist/runs/98nmh0vy/tensorboard?workspace=user-

Hope that works. It’s that guy on YouTube who says ‘dear fellow scholars’ and ‘what a time to be alive’.

The advertising was: Lambda GPU cloud, $20 for an ImageNet training run, no setup required. Good to know.

Looks like a nice UI for experiment tracking: https://www.wandb.com/articles
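
The integration itself is just a couple of calls. A minimal sketch with the wandb package; the project name and metric are placeholders:

import wandb

# Start a run; config values show up as hyperparameters in the UI.
wandb.init(project="mnist-example", config={"lr": 1e-3, "epochs": 3})

for epoch in range(wandb.config.epochs):
    loss = 1.0 / (epoch + 1)  # placeholder for a real training loss
    wandb.log({"epoch": epoch, "loss": loss})

wandb.finish()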

Categories
evolution institutes

MIT Press

https://www.mitpressjournals.org/doi/full/10.1162/ARTL_a_00210 – this was such a good find, but the rest of their site wasn’t cooperating.

http://cognet.mit.edu/journals/evolutionary-computation/28/1 – it would be cool if I could view these PDFs. The SA IP range is banned 🙁

Categories
AI/ML institutes neuro

Numenta

https://grokstream.com/grok-complete-aiops-solutions/