- 339 videos
- 7,220,808 views
CodeEmporium
United States
Joined May 27, 2016
Everything new and interesting in Machine Learning, Deep Learning, Data Science, and Artificial Intelligence. Hoping to build a community of data science geeks and talk about future tech! Project demos and more! Subscribe for awesome videos :)
Informer: Time Series Attention Code
In this video, we code ProbSparse attention and compare it to full attention for time series.
ABOUT ME
⭕ Subscribe: ruclips.net/user/CodeEmporium
📚 Medium Blog: medium.com/@dataemporium
💻 Github: github.com/ajhalthor
👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/
RESOURCES
[1] Main repo: github.com/zhouhaoyi/Informer2020/blob/main/models/attn.py
[2] Code for the colab notebook: github.com/ajhalthor/Informer/tree/main
PLAYLISTS FROM MY CHANNEL
⭕ Deep Learning 101: ruclips.net/p/PLTl9hO2Oobd_NwyY_PeSYrYfsvHZnHGPU
⭕ Natural Language Processing 101: ruclips.net/p/PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE
⭕ Reinforcement Learning 101: ruclips.net/p/PLTl9hO2Oobd9kS--NgVz0EPNyEmygV1Ha&si=AuThDZJwG19cgTA8
⭕ Trans...
835 views
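The core trick the video codes can be sketched in a few lines of NumPy. This is a simplified, single-head illustration (the function name `probsparse_attention` and the toy shapes are my own; the actual Informer repo samples keys when scoring queries and runs batched multi-head attention):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def probsparse_attention(Q, K, V, u):
    """Toy ProbSparse attention: score each query by how 'peaked' its
    attention distribution is, run full attention only for the top-u
    queries, and give the remaining 'lazy' queries the mean of V."""
    L_q, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                 # (L_q, L_k)
    M = scores.max(axis=1) - scores.mean(axis=1)  # sparsity measure per query
    top = np.argsort(M)[-u:]                      # the u most "active" queries
    out = np.tile(V.mean(axis=0), (L_q, 1))       # lazy queries -> mean of values
    out[top] = softmax(scores[top]) @ V           # active queries -> full attention
    return out, top
```

Computing attention rows only for the top-u queries is where the paper's O(L log L) saving comes from; this sketch computes the full score matrix purely for readability.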
Videos
Informer: Time Series Attention Architecture
1.7K views · 21 days ago
Here is the architecture of ProbSparse attention for time series transformers. ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Main paper that introduced the Informer: arxiv.org/pdf/2012.07436 PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ruclips.net/p...
Informer: Time series Transformer - EXPLAINED!
4.4K views · 28 days ago
Let's talk about a time series transformer: Informer. ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Main paper that introduced the Informer: arxiv.org/pdf/2012.07436 PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ruclips.net/p/PLTl9hO2Oobd_NwyY_PeSYrY...
Hyperparameters - EXPLAINED!
1.5K views · 2 months ago
Let's talk about hyperparameters and how they are used in neural networks and deep learning! ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for Deep Learning 101 playlist: github.com/ajhalthor/deep-learning-101 PLAYLISTS FROM MY CHANNEL ⭕ Deep Le...
How much training data does a neural network need?
2.5K views · 3 months ago
Let's answer the question: "how much training data does a neural network need"? ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for Deep Learning 101 playlist: github.com/ajhalthor/deep-learning-101 PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ru...
NLP with Neural Networks | ngram to LLMs
2.4K views · 3 months ago
Let's talk about NLP with neural networks and highlight ngrams to Large Language Models (LLMs) ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/blob/main/embeddings.ipynb PLAYLISTS FROM MY CHANN...
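The jump from n-grams to neural language models is easier to appreciate with the count-based starting point written out. A minimal bigram model (the toy corpus and the function name are my own, purely for illustration):

```python
from collections import Counter, defaultdict

def train_bigram_lm(tokens):
    """Count-based bigram model: P(w2 | w1) = count(w1, w2) / count(w1, *)."""
    counts = defaultdict(Counter)
    for w1, w2 in zip(tokens, tokens[1:]):
        counts[w1][w2] += 1
    # convert raw counts to conditional probabilities
    return {w1: {w2: c / sum(nxt.values()) for w2, c in nxt.items()}
            for w1, nxt in counts.items()}

probs = train_bigram_lm("the cat sat on the mat".split())
# "the" is followed by "cat" and "mat" once each -> probability 0.5 for both
```

Everything a neural LM adds, from embeddings up to LLMs, is a way of sharing statistical strength between contexts that a raw count table treats as unrelated.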
Transfer Learning - EXPLAINED!
3.5K views · 4 months ago
Let's talk about a neural network concept called transfer learning. We use this in BERT, GPT and the large language models today. ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/huggingface/notebooks/blob/main/examples/qu...
Embeddings - EXPLAINED!
5K views · 4 months ago
Let's talk about embeddings in neural networks ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/blob/main/embeddings.ipynb PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ruclips.net/p/PLTl9hO2Oo...
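Mechanically, an embedding layer is just a lookup into a trainable matrix, which is the same thing as multiplying a one-hot vector by that matrix. A sketch (the three-word vocabulary and the 4-dimensional size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"cat": 0, "dog": 1, "car": 2}
E = rng.normal(size=(len(vocab), 4))  # one 4-d vector per token, learned in training

def embed(word):
    # an embedding "layer" is just a row lookup into the matrix E
    return E[vocab[word]]

# equivalent view: a one-hot vector times the embedding matrix selects the same row
one_hot = np.zeros(len(vocab))
one_hot[vocab["cat"]] = 1.0
```

Frameworks implement the lookup directly because multiplying by a one-hot vector wastes work, but the two views are mathematically identical.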
Batch Normalization in neural networks - EXPLAINED!
2.8K views · 4 months ago
Let's talk batch normalization in neural networks ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/tree/main [2] Batch Normalization main paper: arxiv.org/pdf/1502.03167.pdf PLAYLISTS FROM MY CH...
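The forward pass of batch normalization fits in a few lines: normalize each feature over the batch dimension, then scale and shift with learned gamma and beta. A NumPy sketch (inference-time running statistics are omitted for brevity):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)  # roughly zero mean, unit variance
    return gamma * x_hat + beta
```

gamma and beta are trainable, so the network can undo the normalization where that helps; eps keeps the division stable for near-constant features.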
Loss functions in Neural Networks - EXPLAINED!
6K views · 4 months ago
Let's talk about Loss Functions in neural networks ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/main PLAYLISTS FROM MY CHANNEL ⭕ Reinforcement Learning 101: ruclips.net/p...
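Two of the most common losses, written out so the formulas are concrete: mean squared error for regression, categorical cross-entropy for classification (function names are mine; `eps` guards against log(0)):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error, the default regression loss."""
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true_onehot, y_prob, eps=1e-12):
    """Categorical cross-entropy over softmax-style probability outputs."""
    return -np.mean(np.sum(y_true_onehot * np.log(y_prob + eps), axis=1))
```

Both hit zero only on a perfect prediction, and both produce the gradients that backpropagation then pushes through the network.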
Optimizers in Neural Networks - EXPLAINED!
2.4K views · 5 months ago
Let's talk about optimizers in neural networks. ABOUT ME ⭕ Subscribe: ruclips.net/user/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/main [2] More details on Activation functions: ruclips.net/video/s-V7gKrsels/...
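All of these optimizers share one skeleton: compute a gradient, then decide how far (and with how much memory of past steps) to move. Minimizing f(w) = w² with plain gradient descent and with momentum makes the difference concrete (a toy sketch, not the video's code):

```python
def sgd_step(w, grad, lr=0.1):
    """Vanilla gradient descent: step straight down the gradient."""
    return w - lr * grad

def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """Momentum: accumulate a velocity so consistent gradients speed up."""
    v = beta * v + grad
    return w - lr * v, v

# minimize f(w) = w^2, whose gradient is 2w
w = 5.0
for _ in range(100):
    w = sgd_step(w, 2 * w)
# each step shrinks w by a factor of 0.8, ending very near the minimum at 0
```

Adam extends the momentum idea with a second running average that rescales the step per parameter, but the update loop looks the same.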
Activation functions in neural networks
3.1K views · 5 months ago
Backpropagation in Neural Networks - EXPLAINED!
3.3K views · 5 months ago
Building your first Neural Network
4.2K views · 5 months ago
Reinforcement Learning through Human Feedback - EXPLAINED! | RLHF
13K views · 6 months ago
Proximal Policy Optimization | ChatGPT uses this
12K views · 6 months ago
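The heart of PPO is a single clipped expression: the probability ratio between the new and old policies times the advantage, clipped so one update can't move the policy too far. A sketch of that surrogate objective (the function name is mine; eps=0.2 is the value the PPO paper suggests):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

# a ratio of 2.0 with positive advantage is clipped to 1.2 before scoring
```

Taking the minimum makes the clip one-sided in the pessimistic direction: the objective never rewards pushing the ratio far outside [1 - eps, 1 + eps].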
Monte Carlo in Reinforcement Learning
8K views · 7 months ago
Reinforcement Learning: on-policy vs off-policy algorithms
7K views · 7 months ago
Foundation of Q-learning | Temporal Difference Learning explained!
12K views · 7 months ago
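The temporal-difference update this video builds toward is one line: nudge V(s) toward the bootstrapped target r + γV(s'). A sketch with a two-state example (the states and numbers are made up for illustration):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    td_error = r + gamma * V[s_next] - V[s]  # target minus current estimate
    V[s] += alpha * td_error
    return V

V = {"A": 0.0, "B": 1.0}
V = td0_update(V, "A", r=0.5, s_next="B")
# target = 0.5 + 0.9 * 1.0 = 1.4, so V["A"] moves from 0.0 to 0.1 * 1.4 = 0.14
```

Q-learning uses the same error, just over state-action values with a max over next actions in the target.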
How to solve problems with Reinforcement Learning | Markov Decision Process
10K views · 8 months ago
Multi Armed Bandits - Reinforcement Learning Explained!
7K views · 8 months ago
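The classic answer to the bandit explore/exploit trade-off is epsilon-greedy: with probability ε pull a random arm, otherwise pull the arm with the best estimated value. A sketch (the function name is mine):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon explore a random arm, else exploit the best."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# epsilon = 0 is pure exploitation: always the highest-value arm
```

Setting ε = 0 never explores and can lock onto a bad arm forever; a small constant ε (or one that decays over time) keeps every arm's estimate improving.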
Elements of Reinforcement Learning
8K views · 8 months ago
[ 100k Special ] Transformers: Zero to Hero
39K views · 9 months ago
20 papers to master Language modeling?
8K views · 9 months ago
How AI (like ChatGPT) understands word sequences.
3K views · 10 months ago
Hi @CodeEmporium, Thank you very much for the video. It is very useful and well done. I just have one clarification. When you draw the Gaussian distribution corresponding to the point Xi (01:10-01:30), it seems that you make the peak of the curve coincide with the intersection of the interpolating line. Usually, it doesn't happen this way; the peak is at the intersection of the vertical line passing through Xi and the interpolating line. As shown in this image: towardsdatascience.com/probabilistic-interpretation-of-linear-regression-clearly-explained-d3b9ba26823b Is that correct?
absolute banger! well done
Could not get anything from this. too complex
How is it 20:1 for alexnet 👀
great explanation.
Great video, thanks! Question: in the third question, how do you sample the subset of keys and queries "depending on importance"?
Good explanation!
lmk if they say
the best video so far
you're a clear, calm explainer.
Nice explanation
well explained , you made it look really easy !
what book is that
Vsauce plugin 😂
1-B 2-ABCD 3-C are these answers correct?
1-CD 2-D 3-D
1 - B, 2 - B, 3 - C. Are these answers correct?
That's not Pong.
1 - A 2 - ABC 3 - A Are these answers correct?
(1) supervised fine-tuning (SFT), (2) reward model (RM) training, and (3) reinforcement learning via proximal policy optimization (PPO) on this reward model. Could you explain these to me?
Great work ❤
I have seen so many transformers videos but this one is outstanding, I also want to request you to make a video on vision transformers too❤
Nice video, well explained. Question, why would I use one or the other? Are there advantages or disadvantages?
Is it your voice, or which AI do you use?
what does x and y represent in the graph you use to show the cats and dog points ?
thank u so much
Really Appreciate Your Efforts. Love from Gujarat India.
The best explanations on transformers that i have seen!
according to dr. c. k. raju calculus was stolen from india
You are my new favorite channel
The video is informative and good. but stop saying quiz time in an annoying way
Super clear, thanks!
1997
Easy
1. either of MSE or MAE as both are used as loss functions for regression. 2.A,B,C 3.A are these answers correct?
Thanks!
Thanks so much for the donation! Glad you liked this content!
it was a great tutorial will be back soon
Thanks for a very informative explanation. This seems like a bit of a step up in complexity from earlier videos, so I suspect some viewers of earlier ones might not make it to the end of this one. I think the Quiz answers are A B B. Presumably this Probsparse approach is useful in other situations (image processing springs to mind) as well as time sequences.
you are the best on the youtube for the transformer all the best
Hey I am not able to access the links you have provided under Math courses and other related courses
Hey I am not able to access the Math courses link mentioned in description. Could you please update those links
@4:38 are you sure d_q is the number of total time steps? I think it's supposed to be the dimension of the query & key.
In the final video, are you going to show an example where you feed data into the model and then interpret the output? It would be good to see any preprocessing of the data to get it in the right format to feed into the model. I'm keen to use this model for a time series forecasting exercise, 8 timesteps ahead.
Can u tell an interactive model of AI neural network for school project.. And ur videos are nice and I understand easily.. Pls tell
All this hindu Indian scientists where really selfie to keep there studies and research to a certain group And when someone from West discover out the completely same thing like after 200 or 300 years they started saying "no we discovered it hundreds of years ago " . So why don't you spread that knowledge Because of people like this the your science knowledge and this so called modern world is 100 years back in time
Very well explained
Why is this so underrated? This should be on everyone's playlist for linear regression. Hats off, man :)
wonderful explanation!!!
C
B