Arxiv Insights
  • 13 videos
  • 2,310,535 views
AlphaFold and the Grand Challenge to solve protein folding
If you want to support this channel, here is my patreon link:
patreon.com/ArxivInsights --- You are amazing!! ;)
If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbrugge
--------------------------------
AlphaFold is DeepMind's latest breakthrough addressing the protein folding problem. Using an advanced deep learning architecture that achieves end-to-end learning of protein structures, this work is arguably one of the most influential papers of this decade and is likely to spark enormous advances in computational biology and protein design. This video covers the entire architecture of the model as well as the training principles that led to the incredible results of AlphaFold2!
AlphaFold Nature paper: www.nature.com/articles/s41586-021-03828-1
AlphaFold Codebase: github.com/deepmind/alphafold
Work from the Baker lab: www.bakerlab.org/
Fabian Fuchs' amazing blog on equivariance: fabianfuchsml.github.io/alphafold2/
Ongoing Open Source effort to reproduce AlphaFold: github.com/lucidrains/alphafold2
::Chapters::
00:00 Intro
02:28 The Protein Folding Problem
05:29 AlphaFold1 revisited
06:10 Multiple Sequence Alignments (MSA)
08:10 Distograms
12:29 AlphaFold2
14:52 The Evoformer
19:07 The Structure Module
28:13 Zooming out: looking at the future
Views: 59,940

Videos

The Molecular Basis of Life
Views: 18K · 2 years ago
If you want to support this channel, here is my patreon link: patreon.com/ArxivInsights You are amazing!! ;) If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbrugge Life is a molecular marvel of astounding complexity. In this video we take a dive into the world of molecular engines, proteins and the...
Editing Faces using Artificial Intelligence
Views: 370K · 4 years ago
Link to Notebooks: drive.google.com/open?id=1LBWcmnUPoHDeaYlRiHokGyjywIdyhAQb Link to the StyleGAN paper: arxiv.org/abs/1812.04948 Link to GAN blogpost: hunterheidenreich.com/blog/gan-objective-functions/ If you want to support this channel, here is my patreon link: patreon.com/ArxivInsights You are amazing!! ;) If you have questions you would like to discuss with me personally, you can book a ...
'How neural networks learn' - Part III: Generalization and Overfitting
Views: 42K · 5 years ago
In this third episode on "How neural nets learn" I dive into a bunch of academic research that tries to explain why neural networks generalize as well as they do. We first look at the remarkable capability of DNNs to simply memorize huge amounts of (random) data. We then see how this picture is more subtle when training on real data and finally dive into some beautiful analysis from the viewpo...
An introduction to Policy Gradient methods - Deep Reinforcement Learning
Views: 187K · 5 years ago
In this episode I introduce Policy Gradient methods for Deep Reinforcement Learning. After a general overview, I dive into Proximal Policy Optimization: an algorithm designed at OpenAI that tries to find a balance between sample efficiency and code complexity. PPO is the algorithm used to train the OpenAI Five system and is also used in a wide range of other challenges like Atari and robotic co...
OpenAI Five: When AI beats professional gamers
Views: 25K · 5 years ago
In this episode I discuss OpenAI Five, a Machine Learning system that was able to defeat professional gamers in the popular video game Dota 2: - How was the system built? - What does this mean for AI progress? - What real world applications can be built on this success? You can find all the OpenAI blogposts here: blog.openai.com/ If you enjoy my videos, all support is super welcome! www.patreon....
Reinforcement Learning with sparse rewards
Views: 114K · 5 years ago
In this video I dive into three advanced papers that address the problem of the sparse reward setting in Deep Reinforcement Learning and pose interesting research directions for mastering unsupervised learning in autonomous agents. Papers discussed: Reinforcement Learning with Unsupervised Auxiliary Tasks - DeepMind: arxiv.org/abs/1611.05397 Curiosity Driven Exploration - UC Berkeley: arxiv.org/...
An introduction to Reinforcement Learning
Views: 639K · 6 years ago
This episode gives a general introduction into the field of Reinforcement Learning: - High level description of the field - Policy gradients - Biggest challenges (sparse rewards, reward shaping, ...) This video forms the basis for a series on RL where I will dive much deeper into technical details of state-of-the-art methods for RL. Links: - "Pong from Pixels - Karpathy": karpathy.github.io/201...
Variational Autoencoders
Views: 471K · 6 years ago
In this episode, we dive into Variational Autoencoders, a class of neural networks that can learn to compress data completely unsupervised! VAEs are a very hot topic right now in unsupervised modelling of latent variables and provide a unique solution to the curse of dimensionality. This video starts with a quick intro into normal autoencoders and then goes into VAEs and disentangled beta-VAE...
'How neural networks learn' - Part II: Adversarial Examples
Views: 54K · 6 years ago
In this episode we dive into the world of adversarial examples: images specifically engineered to fool neural networks into making completely wrong decisions! Link to the first part of this series: ua-cam.com/video/McgxRxi2Jqo/v-deo.html If you want to support this channel, here is my patreon link: patreon.com/ArxivInsights You are amazing!! ;) If you have questions you would like to discuss wi...
'How neural networks learn' - Part I: Feature Visualization
Views: 105K · 6 years ago
Interpreting what neural networks are doing is a tricky problem. In this video I dive into the approach of feature visualisation. From simple neuron excitation to the Deep Visualisation Toolbox and the Google DeepDream project, let's open up the black box! Links: Distill.pub post on Feature Visualisation: distill.pub/2017/feature-visualization/ Sander Dieleman post on music recommendation: bena...
Why humans learn so much faster than AI
Views: 49K · 6 years ago
- Link to edited game versions: rach0012.github.io/humanRL_website/ - Link to the Paper: openreview.net/pdf?id=Hk91SGWR- "Why are humans such incredibly fast learners?" This is the core question of this paper. By leveraging powerful prior knowledge about how the world works, humans are able to quickly figure out efficient strategies in new and unseen environments. Current state-of-the-art Reinf...
AlphaGo - How AI mastered the hardest boardgame in history
Views: 179K · 6 years ago
In this episode I dive into the technical details of the AlphaGo Zero paper by Google DeepMind. This AI system uses Reinforcement Learning to beat the world's Go champion using only self-play, a remarkable display of clever engineering on the path to stronger AI systems. DeepMind Blogpost: deepmind.com/blog/alphago-zero-learning-scratch/ AlphaGo Zero paper: storage.googleapis.com/deepmind-media...

COMMENTS

  • @luxliquidlumenvideoproduct5425
    @luxliquidlumenvideoproduct5425 6 days ago

    One must stress what you say at the end of the video at 28:20: although AlphaFold 2.0 can predict the native conformation of an amino acid sequence, there are other contributing factors, and the algorithm isn't able to answer why, nor how proteins find their native state out of the vast combinatorial space of possible conformations. Levinthal's Paradox.
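
    For a sense of the scale behind Levinthal's Paradox, here is a back-of-the-envelope calculation in Python (a rough sketch; the ~3 conformations per residue and the sampling rate are the commonly quoted illustrative assumptions, not measured values):

        # Levinthal's Paradox, back of the envelope.
        # Assumed: ~3 backbone conformations per residue, ~1e13 samples per second.
        residues = 100                    # a small protein
        states = 3 ** residues            # ~5.2e47 possible conformations
        rate = 1e13                       # conformations sampled per second (assumed)
        years = states / rate / (3600 * 24 * 365)
        print(f"{states:.2e} states -> ~{years:.2e} years to search exhaustively")
        # ~1.6e27 years, vastly longer than the age of the universe,
        # yet real proteins fold in milliseconds -- hence the paradox.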

  • @anishahandique4815
    @anishahandique4815 14 days ago

    After going through most of the UA-cam videos on this topic, this one was one of the best of them all. Very clear and crisp explanation. Thank you ❤

  • @muhammadhelmy5575
    @muhammadhelmy5575 17 days ago

    4:00

  • @tugrulz
    @tugrulz 1 month ago

    subscribed

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 1 month ago

    1:00

  • @bishnuprasadnayak9520
    @bishnuprasadnayak9520 1 month ago

    Amazing

  • @conlanrios
    @conlanrios 1 month ago

    Great breakdown and links for additional resources

  • @ViewsfromVick
    @ViewsfromVick 1 month ago

    Bro! you were soo ahead of your time! Like Scooby Doo

  • @teegeevee42
    @teegeevee42 2 months ago

    This is so good. Thank you!

  • @noahgsolomon
    @noahgsolomon 2 months ago

    GOAT

  • @lamborghinicentenario2497
    @lamborghinicentenario2497 2 months ago

    12:28 what did you use to connect the machine learning to a 3d model?

  • @bikrammajhi3020
    @bikrammajhi3020 2 months ago

    This is gold!!

  • @azizbekibnhamid642
    @azizbekibnhamid642 2 months ago

    Great work

  • @iwanttobreakfree701
    @iwanttobreakfree701 2 months ago

    6 years ago, and I now use this video as a guide to understanding StableDiffusion

    • @commenterdek3241
      @commenterdek3241 1 month ago

      Can you help me out as well? I have so many questions but no one to answer them.

  • @zzewt
    @zzewt 2 months ago

    This is cool, but after the third random jumpscare sound I couldn't pay attention to what you were saying--all I could think about was when the next one would be. Gave up halfway through since it was stressing me out

  • @sELFhATINGiNDIAN
    @sELFhATINGiNDIAN 3 months ago

    this guy too handsome, Italian hands

  • @BooleanDisorder
    @BooleanDisorder 3 months ago

    Rest in peace Tishby

  • @Matthew8473
    @Matthew8473 3 months ago

    This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn

  • @LilliHerveau
    @LilliHerveau 3 months ago

    feel like beta should be decreased as training progresses and the learning rate decreases too. Sounds like hyperparameter tuning though

  • @NoobsDeSroobs
    @NoobsDeSroobs 3 months ago

    Figuratively exploded*

  • @LuisFernandoGaido
    @LuisFernandoGaido 3 months ago

    Five years later and RL is still a dream product. Nothing was really solved in the real world. I think there are practical areas of AI better than that.

  • @p4k7
    @p4k7 3 months ago

    Great video, and the algorithm is finally recognizing it! Come back and produce more videos?

  • @user-xz6ld7nl2l
    @user-xz6ld7nl2l 3 months ago

    This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.

  • @obensustam3574
    @obensustam3574 4 months ago

    Very good video

  • @erickgomez7775
    @erickgomez7775 4 months ago

    If you don't understand this explanation, the fault is on you.

  • @SurferDudex99
    @SurferDudex99 4 months ago

    Lmao this must be a joke. Anyone who supports this theory has no understanding of the exponential nature of how AI learns.

  • @alaad1009
    @alaad1009 4 months ago

    Excellent video

  • @infoman6500
    @infoman6500 4 months ago

    Very interesting. It looks like Nature is alive, very much alive.

  • @infoman6500
    @infoman6500 4 months ago

    Glad to see that the human biological neural network is still much more efficient than machines with artificial neural networks.

  • @infoman6500
    @infoman6500 4 months ago

    Excellent educational video on artificial and deep neural network learning.

  • @infoman6500
    @infoman6500 4 months ago

    Excellent video education on bio-molecular technology.

  • @alexanderkurz2409
    @alexanderkurz2409 4 months ago

    Another amazing video ... thanks ... any chance of some new videos coming out on recent papers?

  • @alexanderkurz2409
    @alexanderkurz2409 4 months ago

    5:03 "to test the presence and influence of different kinds of human priors" ... this is pretty cool ...

  • @alexanderkurz2409
    @alexanderkurz2409 4 months ago

    3:12 This reminds me of Chomsky's critique of AI and LLMs. Any comments?

  • @yonistoller1
    @yonistoller1 5 months ago

    Thanks for sharing this! I may be misunderstanding something, but it seems like there might be a mistake in the description. Specifically, the claim at 12:50 that "this is the only region where the unclipped part... has a lower value than the clipped version". I think this claim might be wrong, because there could be another case where the unclipped version would be selected: for example, if the ratio is e.g. 0.5 (and we assume epsilon is 0.2), the ratio would be smaller than the clipped version (which would be 0.8), and it would be selected. Is that not the case?
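
    A quick numeric check of the scenario raised in this comment, using the standard PPO clipped surrogate (a minimal sketch; the positive advantage is an assumption, since the sign matters):

        # PPO clipped surrogate: min(r * A, clip(r, 1-eps, 1+eps) * A)
        def clipped_surrogate(ratio, advantage, eps=0.2):
            clipped = max(min(ratio, 1 + eps), 1 - eps)  # clip(ratio, 0.8, 1.2)
            return min(ratio * advantage, clipped * advantage)

        ratio, advantage = 0.5, 1.0      # the commenter's example, assuming A > 0
        print(ratio * advantage)         # unclipped term: 0.5
        print(0.8 * advantage)           # clipped term:   0.8
        print(clipped_surrogate(ratio, advantage))  # 0.5 -> unclipped term selected

    With ratio = 0.5 and a positive advantage, the unclipped term (0.5) is indeed lower than the clipped one (0.8), so the min does select the unclipped part in that region as well.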

  • @moozzzmann
    @moozzzmann 5 months ago

    Great video!! I just watched 4 hours' worth of lectures in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work

  • @bowenjing3674
    @bowenjing3674 5 months ago

    I didn't forget to subscribe, but you seem to have forgotten to keep uploading

  • @hosseinaboutalebi9998
    @hosseinaboutalebi9998 5 months ago

    Why have you stopped making these wonderful tutorials? I wish you had continued your channel.

  • @kaiz6997
    @kaiz6997 5 months ago

    Extremely amazing, thanks for creating this incredible video

  • @negatopoji7
    @negatopoji7 5 months ago

    The term "activation" in the context of neural networks generally refers to the output of a neuron, regardless of whether the network is recognizing a specific pattern. The activation is indeed a numerical value that represents the result of applying the neuron's activation function to the weighted sum of its inputs. Just posting here what ChatGPT told me, because the definition of "activation" in this video confused me
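
    That definition reads roughly as follows in code (a minimal sketch; the inputs, weights, and choice of ReLU are arbitrary illustrations):

        # A single neuron: activation = f(weighted sum of inputs + bias).
        def neuron_activation(inputs, weights, bias, f=lambda z: max(0.0, z)):  # f = ReLU
            z = sum(x * w for x, w in zip(inputs, weights)) + bias  # pre-activation
            return f(z)  # the "activation": the neuron's output, pattern match or not

        print(neuron_activation([0.5, -1.0], [0.8, 0.3], bias=0.1))  # ~0.2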

  • @davidenders9107
    @davidenders9107 5 months ago

    Thank you! This was comprehensive and comprehensible.

  • @berkceyhan5031
    @berkceyhan5031 6 months ago

    Very very good video, thank you

  • @finnweikert3430
    @finnweikert3430 6 months ago

    thank you sir! appreciate the effort that went into this video

  • @user-bh8xb2yy5d
    @user-bh8xb2yy5d 6 months ago

    THANK YOOOOOOOU I was reading that article you commented on and I couldn't understand for the life of me how they were generating those images, so tysm ;-;

  • @vinel208
    @vinel208 6 months ago

    ... huh?

  • @atcer51
    @atcer51 6 months ago

    fiiiinnnaaaallly after tons of googling, I finally found a USEFUL video that actually EXPLAINS how to reward the agent, instead of just saying: 'oh u just reward it'

  • @malkiwijesinghe-wq3dt
    @malkiwijesinghe-wq3dt 6 months ago

    As a person shifting careers from data science to bioinformatics, I found this video very helpful & amazingly animated! Hope to see more stuff related to computational biology & AI applications in that area

  • @yannchoho9097
    @yannchoho9097 6 months ago

    you are great bro

  • @MrWater2
    @MrWater2 7 months ago

    Man, that's called standardization!! Wtf?? Reparametrization trick???
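
    The resemblance is real: the reparametrization trick is essentially inverse standardization, applied so that the sampling step stays differentiable with respect to the encoder outputs. A minimal sketch, assuming NumPy is available (the mu and sigma values are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma = 1.5, 0.4            # encoder outputs for one latent dimension

        eps = rng.standard_normal()     # randomness isolated in a fixed N(0, 1)
        z = mu + sigma * eps            # inverse standardization; differentiable in mu, sigma
        print(z)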

  • @salmagamal5676
    @salmagamal5676 7 months ago

    Incredible work!