Have you run your first torch demo and acquainted yourself with the main actors (tensors, modules, optimizers)? Then you’re ready to dive into applied examples. The list keeps growing as the ecosystem evolves. What area are you interested in?
A thorough introduction to the why and how of image processing with deep learning is found in our book, Deep Learning and Scientific Computing with R torch.
Bird classification is a multi-class classification task. In addition to being a blueprint for doing classification with torch, this introductory example shows how to load data, make use of pre-trained models, and benefit from learning rate schedulers.
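The pattern that post follows can be sketched in a few lines: load a pre-trained network, swap in a task-specific head, and attach a learning rate scheduler. The sketch below uses `torchvision`'s ResNet-18 and a step-decay scheduler; the class count and hyperparameters are illustrative, not taken from the post.

```r
library(torch)
library(torchvision)

# Load a ResNet-18 pre-trained on ImageNet (weights download on first use).
model <- model_resnet18(pretrained = TRUE)

# Freeze the pre-trained feature extractor.
for (par in model$parameters) par$requires_grad_(FALSE)

# Replace the classification head; 10 stands in for the number of bird species.
num_classes <- 10
model$fc <- nn_linear(model$fc$in_features, num_classes)

# Optimize the (trainable) head, with a step-decay learning rate scheduler.
optimizer <- optim_adam(model$fc$parameters, lr = 0.01)
scheduler <- lr_step(optimizer, step_size = 5, gamma = 0.1)

# In the training loop, call scheduler$step() once per epoch,
# after optimizer$step().
```

The scheduler here multiplies the learning rate by `gamma` every `step_size` epochs; the post itself discusses scheduler choice in more depth.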
Brain image segmentation builds a U-Net from scratch. This intermediate-level example is a great introduction to building your own modules, as well as custom datasets that perform data preprocessing and data augmentation for computer vision.
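The custom-dataset idea mentioned there boils down to implementing three methods: `initialize`, `.getitem`, and `.length`. Here is a minimal, hypothetical sketch with random data standing in for images; per-item preprocessing and augmentation would live inside `.getitem`.

```r
library(torch)

# A minimal custom dataset; names and shapes are illustrative only.
image_dataset <- dataset(
  initialize = function(x, y) {
    self$x <- x
    self$y <- y
  },
  .getitem = function(i) {
    # Per-item preprocessing / data augmentation would go here.
    list(x = self$x[i, ..], y = self$y[i])
  },
  .length = function() {
    dim(self$x)[1]
  }
)

# 100 fake 3-channel, 32x32 "images" with binary labels.
ds <- image_dataset(
  torch_randn(100, 3, 32, 32),
  torch_randint(1, 3, 100, dtype = torch_long())
)
dl <- dataloader(ds, batch_size = 16)
```

Because transformations run in `.getitem`, augmentation is applied lazily, item by item, as batches are drawn from the `dataloader`.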
An interesting use case that illustrates the importance of domain knowledge is discussed in Labeling poisonous mushrooms, a first introduction to handling a mix of numerical and categorical data, using embedding modules for the latter. It also provides a blueprint for creating torch models from scratch.
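The core technique, embedding a categorical feature and concatenating it with the numeric ones, can be sketched as follows. The module name and all sizes are hypothetical; the post works with the actual mushroom data.

```r
library(torch)

# A hypothetical module mixing one categorical feature (via an embedding)
# with numeric features.
tabular_net <- nn_module(
  initialize = function(num_levels, embedding_dim, num_numeric, hidden) {
    self$embedding <- nn_embedding(num_levels, embedding_dim)
    self$fc1 <- nn_linear(embedding_dim + num_numeric, hidden)
    self$fc2 <- nn_linear(hidden, 1)
  },
  forward = function(x_cat, x_num) {
    # Look up an embedding for each categorical value, then concatenate
    # with the numeric features before the fully connected layers.
    embedded <- self$embedding(x_cat)
    x <- torch_cat(list(embedded, x_num), dim = 2)
    x |> self$fc1() |> nnf_relu() |> self$fc2()
  }
)

model <- tabular_net(num_levels = 5, embedding_dim = 3, num_numeric = 4, hidden = 16)
# Note: categorical indices in torch for R are 1-based.
x_cat <- torch_randint(1, 6, 10, dtype = torch_long())
x_num <- torch_randn(10, 4)
out <- model(x_cat, x_num)
```

The embedding dimension trades off capacity against overfitting; the post discusses how such learned representations can surface domain structure in the categories.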
torch, tidymodels, and high-energy physics introduces tabnet, a torch implementation of “TabNet: Attentive Interpretable Tabular Learning” that is fully integrated with the tidymodels framework. Thanks to tidymodels integration, both pre-processing and hyperparameter tuning require only a minimal amount of code.
Time series forecasting
The general ideas behind time-series prediction with deep learning are discussed in depth in the book, Deep Learning and Scientific Computing with R torch.
Introductory time-series forecasting with torch is a thorough introduction to RNNs (GRUs/LSTMs), explaining usage and terminology.
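As a taste of what the post covers, here is a minimal one-step-ahead forecaster: a GRU runs over the input sequence, and a linear layer reads off the last time step. All sizes are illustrative, not taken from the post.

```r
library(torch)

# A minimal RNN forecaster; sizes are illustrative.
rnn_forecaster <- nn_module(
  initialize = function(input_size, hidden_size) {
    self$gru <- nn_gru(input_size, hidden_size, batch_first = TRUE)
    self$output <- nn_linear(hidden_size, 1)
  },
  forward = function(x) {
    # nn_gru returns list(output, hidden state); keep the full output
    # and read off the last time step.
    out <- self$gru(x)[[1]]
    self$output(out[, dim(out)[2], ])
  }
)

model <- rnn_forecaster(input_size = 1, hidden_size = 32)
# A batch of 8 univariate series, each 20 time steps long.
x <- torch_randn(8, 20, 1)
pred <- model(x)  # one prediction per series: shape (8, 1)
```

Swapping `nn_gru` for `nn_lstm` is nearly a drop-in change (the LSTM additionally returns a cell state); the post explains the terminology behind both.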
torch time series continued: A first go at multi-step prediction builds on this, widening the scope to multi-step prediction.
torch time series, take three: Sequence-to-sequence prediction and torch time series, final episode: Attention expand on the prior two articles, introducing more advanced concepts like sequence-to-sequence processing and attention.
Convolutional LSTM for spatial forecasting is an intermediate-level example that shows how to build a convolutional LSTM from scratch.
In its chapter on audio classification, the torch book shows, by example, the usefulness of integrating Fourier-domain representations with deep learning.