"Deep Learning for Image Analytics" is a comprehensive course designed to explore the transformative
impact of deep learning on the field of image processing and computer vision. This course delves into the
fundamentals of neural networks, including convolutional and recurrent architectures, and their
applications in various image analytics tasks. Students will gain hands-on experience with state-of-the-art
deep learning frameworks like TensorFlow and PyTorch, learning to implement, train, and optimize
models for tasks such as image classification, object detection, and image segmentation. By bridging
theory with practical implementation, the course aims to provide a robust understanding of both the
technical and applied aspects of deep learning in image analytics. Additionally, the curriculum covers
advanced topics like transfer learning and generative adversarial networks (GANs), preparing students to
tackle real-world challenges and stay abreast of current trends and innovations in this rapidly evolving
field.
The objectives of the course are:
• Understand deep learning fundamentals and CNN architectures.
• Develop and evaluate CNNs for image classification and object detection.
• Implement generative models and deploy deep learning applications using modern
frameworks.
After completion of the course, students will be able to:
• CO1: Explain the basic concepts of neural networks, including activation functions and
backpropagation. (Understand, Remember)
• CO2: Design and implement CNN architectures for various image analytics tasks. (Apply, Create)
• CO3: Utilize advanced CNN architectures and transfer learning for enhanced performance. (Apply,
Analyze)
• CO4: Develop and evaluate models for image classification, object detection, and segmentation.
(Apply, Evaluate)
• CO5: Implement generative models and deploy deep learning applications using TensorFlow,
Keras, and PyTorch. (Apply, Create)
The course includes the following study materials and laboratory experiments (but not limited to):
Study Material
https://drive.google.com/file/d/1qHsiOa-HvpA-xZSuE5BVEuDGtdHHCVFD/view?usp=sharing
Manual
https://drive.google.com/file/d/1IgFeQ9yUoBzZmUVbJYfzMA3EMyg2XRU_/view?usp=sharing
Study Material
https://drive.google.com/file/d/1WQsD5qUvV06spKGBxUQzK2wZbQNSAHFc/view?usp=sharing
Manual
https://drive.google.com/file/d/1dfGU-P3-B9UDLKNP0dQpHf-Q0QmxNmKi/view?usp=sharing
Study Material
https://drive.google.com/file/d/14LruO8t71F2KgPJn8MVrxK88NNmcwWOR/view?usp=sharing
Manual
https://drive.google.com/file/d/1dSLeMV_YWGsq7JuNMbtxENXS9waGbj0l/view?usp=sharing
Experiment 4
Building a Basic CNN: Implement a CNN architecture with convolutional, pooling, and fully connected layers in Python on the MNIST handwritten digit dataset to classify digits (sketch below).
Manual
https://drive.google.com/file/d/116SFIBl63dJJzN7vHuf_MvX2N_S-Q0iE/view?usp=sharing
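A minimal sketch of Experiment 4, assuming TensorFlow/Keras and its built-in MNIST loader; the layer sizes and epoch count are illustrative choices, not requirements of the manual:

```python
# Minimal CNN for MNIST digit classification (sizes are illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

# Load and normalize MNIST; add a channel dimension for Conv2D.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # one output per digit class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```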
Experiment 5
Visualize Filter Kernels: Visualize the learned filters (kernels) from the convolutional layers to understand how they extract features (sketch below).
Manual
https://drive.google.com/file/d/1sswGZSZeqe-4IDl15tjxMqJfR7lORvhV/view?usp=sharing
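One possible approach for Experiment 5, assuming a trained Keras model named `model` whose first layer is a Conv2D with 32 filters of size 3x3 (e.g., the Experiment 4 sketch); the grid size is chosen to match those assumptions:

```python
# Plot the learned kernels of the first convolutional layer.
import matplotlib.pyplot as plt

# model.layers[0] is assumed to be the first Conv2D layer.
kernels, biases = model.layers[0].get_weights()          # shape: (3, 3, 1, 32)
kernels = (kernels - kernels.min()) / (kernels.max() - kernels.min())  # scale to [0, 1]

fig, axes = plt.subplots(4, 8, figsize=(8, 4))            # 32 kernels in a 4x8 grid
for i, ax in enumerate(axes.flat):
    ax.imshow(kernels[:, :, 0, i], cmap="gray")           # i-th 3x3 kernel
    ax.axis("off")
plt.suptitle("First-layer convolution kernels")
plt.show()
```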
Experiment 6
Experiment with Pooling: Compare the impact of different pooling types (Max Pooling, Average Pooling) on the performance of a CNN for image classification (sketch below).
Manual
https://drive.google.com/file/d/1oXaJFGkOy_fcaPXIKBz47KD8NddYBHmf/view?usp=sharing
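A sketch for Experiment 6, assuming TensorFlow/Keras and MNIST as the comparison dataset; only the pooling layer differs between the two runs:

```python
# Compare Max vs. Average pooling with otherwise identical CNNs (illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

def build_cnn(pool_layer):
    """Same architecture throughout; only the pooling type changes."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        pool_layer(2),
        layers.Conv2D(64, 3, activation="relu"),
        pool_layer(2),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])

for name, pool in [("max", layers.MaxPooling2D), ("average", layers.AveragePooling2D)]:
    model = build_cnn(pool)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name} pooling test accuracy: {acc:.4f}")
```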
Experiment 7
Fine-tuning a Pre-trained CNN: Load a pre-trained CNN model (AlexNet, VGG, ResNet, Inception) and fine-tune it on a smaller dataset to classify a different set of objects (sketch below).
Manual
https://drive.google.com/file/d/1gmSTw3DTRuJWnXyJq3ikG-sr3dGaadz8/view?usp=sharing
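A sketch for Experiment 7, assuming PyTorch with torchvision 0.13 or newer, a hypothetical 10-class target dataset, and a user-supplied `loader` yielding ImageNet-normalized 224x224 batches:

```python
# Fine-tune a pre-trained ResNet-18 on a new, smaller dataset (illustrative).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumption: 10 target categories in the new dataset

# Load ImageNet weights, freeze the backbone, and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                            # keep pre-trained features fixed
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)    # new trainable head

# Only the new head's parameters are passed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    """`loader` is assumed to yield (images, labels) batches for the new task."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```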
Experiment 8
Implement Residual Connections: Explore building a simple ResNet block in Python and observe how it addresses the vanishing gradient problem (sketch below).
Manual
https://drive.google.com/file/d/1VHM1C3-0xXqpNGM_k3ZbbF02E3skq7f2/view?usp=sharing
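A sketch for Experiment 8: a single residual block in PyTorch. The identity shortcut adds the block's input to its output, giving gradients a direct path during backpropagation; the channel count and input size are illustrative:

```python
# A simple residual block: output = ReLU(F(x) + x).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                                # skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)            # add the shortcut before activation

# Quick shape check on a dummy batch.
block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)      # torch.Size([1, 64, 32, 32])
```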
Experiment 9
Pre-trained Models: Fine-tune a pre-trained CNN to classify a new set of images with a smaller dataset (sketch below).
Manual
https://drive.google.com/file/d/1gmSTw3DTRuJWnXyJq3ikG-sr3dGaadz8/view?usp=sharing
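A Keras variant for Experiment 9, to complement the PyTorch sketch above: the pre-trained base is first frozen and used as a feature extractor; un-freezing its top layers afterwards turns this into fine-tuning. The class count and dataset objects are assumptions:

```python
# Use a pre-trained VGG16 base with a new classifier head (illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 5  # assumption: a small custom dataset with 5 categories

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze all convolutional layers for the first training phase

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # datasets are assumed

# Optional fine-tuning phase: unfreeze the top layers and re-compile
# with a lower learning rate before training again.
# for layer in base.layers[-4:]:
#     layer.trainable = True
```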
Experiment 10
Transfer Learning: Compare training a model from scratch vs. fine-tuning a pre-trained model on a new classification task with a smaller dataset (sketch below).
Manual
https://docs.google.com/document/d/1h08FfqNSYo3P4frBnNjKt-XViDXFlYRs/edit?usp=sharing&ouid=113639235384318197160&rtpof=true&sd=true
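A sketch for Experiment 10, assuming torchvision: the same architecture is built twice, once randomly initialized and once from ImageNet weights, so both can be trained and evaluated under identical conditions:

```python
# Build the same architecture from scratch and from pre-trained weights.
import torch.nn as nn
from torchvision import models

def build_resnet18(pretrained: bool, num_classes: int = 10):
    weights = models.ResNet18_Weights.DEFAULT if pretrained else None
    net = models.resnet18(weights=weights)
    net.fc = nn.Linear(net.fc.in_features, num_classes)   # new task head
    return net

scratch_model = build_resnet18(pretrained=False)    # trained from scratch
finetune_model = build_resnet18(pretrained=True)    # initialized from ImageNet

# Train both with identical data, optimizer, and epochs, then compare
# validation accuracy; on a small dataset the fine-tuned model typically
# converges faster and generalizes better.
```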
Experiment 11
Data Augmentation Techniques: Implement data augmentation techniques (random cropping, flipping, color jittering) in Python and observe their impact on model performance (sketch below).
Manual
https://drive.google.com/file/d/1Gg_aQz0UdYWVpP9exKaEI82syfEb261Y/view?usp=sharing
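A sketch for Experiment 11 using torchvision transforms (a Keras preprocessing-layer pipeline would work equally well); the image size, jitter strengths, and dataset path are illustrative:

```python
# Augmentation pipeline matching the experiment: random crop, flip, color jitter.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),                # random cropping + resize
    transforms.RandomHorizontalFlip(p=0.5),           # random flipping
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])

# Unaugmented pipeline for comparison; training the same model with each
# and comparing validation accuracy shows the effect of augmentation.
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Example usage (dataset path is an assumption):
# from torchvision.datasets import ImageFolder
# train_ds = ImageFolder("data/train", transform=train_transform)
```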
Experiment 12
Train-Validation Split: Conduct experiments with different train-validation split ratios to find an optimal balance for model training and evaluation (sketch below).
Manual
https://drive.google.com/file/d/1rDnQ0jeDW2t7el9_SkSM5wtBXrqJYbJF/view?usp=sharing
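A sketch for Experiment 12, using Keras' `validation_split` argument and MNIST as a stand-in dataset; the ratios and the small model are illustrative:

```python
# Compare different train/validation split ratios with Keras' validation_split.
import tensorflow as tf
from tensorflow.keras import layers, models

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x[..., None].astype("float32") / 255.0

for val_fraction in (0.1, 0.2, 0.3):
    model = models.Sequential([
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x, y, epochs=3, validation_split=val_fraction, verbose=0)
    print(f"val split {val_fraction}: "
          f"final val accuracy {history.history['val_accuracy'][-1]:.4f}")
```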
Experiment 13
Monitor Training Progress: Visualize training and validation loss curves using libraries like Matplotlib to track model learning and identify potential overfitting or underfitting (sketch below).
Manual
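A sketch for Experiment 13, assuming `history` is the object returned by a Keras `model.fit(...)` call that included validation data:

```python
# Plot training vs. validation loss to spot overfitting (validation loss
# rising while training loss keeps falling) or underfitting (both stay high).
import matplotlib.pyplot as plt

plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.title("Training progress")
plt.show()
```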
Experiment 14
Implement Non-Maximum Suppression (NMS): Develop a Python function for NMS, a technique used in object detection algorithms like R-CNN to remove redundant bounding boxes (sketch below).
Manual
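A sketch for Experiment 14: a plain NumPy implementation of NMS; the IoU threshold and sample boxes are illustrative:

```python
# Non-Maximum Suppression: keep the highest-scoring box, drop boxes that
# overlap it above an IoU threshold, and repeat with the remainder.
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,). Returns kept indices."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with every remaining box.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]   # discard heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # [0, 2]: the second box overlaps the first and is suppressed
```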
Experiment 15
Explore YOLO Object Detection: Implement a basic YOLO model in Python and compare its performance with R-CNN variants on a suitable object detection dataset such as COCO (sketch below).
Manual
https://drive.google.com/file/d/1l4BlK17F-8AK9GStQYj6Le3pWQ-wiQf-/view?usp=sharing
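A starting point for Experiment 15. Implementing YOLO from scratch is beyond a short snippet, so this sketch assumes the third-party `ultralytics` package and one of its pre-trained YOLOv8 checkpoints purely for experimentation; the image path is a placeholder:

```python
# Run a small pre-trained YOLO detector and inspect its predictions.
# Assumption: the third-party `ultralytics` package is installed (pip install ultralytics).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # small pre-trained YOLOv8 checkpoint
results = model("sample.jpg")       # path to a test image (placeholder)

for r in results:
    for box in r.boxes:
        # class name, confidence, and [x1, y1, x2, y2] box coordinates
        print(r.names[int(box.cls)], float(box.conf), box.xyxy.tolist())
```

For the comparison with R-CNN variants, the same images could be run through torchvision's pre-trained Faster R-CNN and the detections and inference times compared side by side.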
Experiment 16
Semantic Segmentation with U-Net: Implement a U-Net architecture in Python for pixel-wise classification tasks such as medical image segmentation (sketch below).
Manual
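A sketch for Experiment 16: a reduced-depth U-Net-style network in PyTorch; the channel sizes, depth, and two-class output are illustrative simplifications of the original architecture:

```python
# Compact U-Net-style encoder/decoder with skip connections.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)      # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)       # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, 1)   # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

net = MiniUNet()
print(net(torch.randn(1, 1, 128, 128)).shape)   # torch.Size([1, 2, 128, 128])
```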
Experiment 17
Visualize Segmentation Masks: Create code to visualize the output segmentation masks produced by your trained segmentation model to assess its accuracy (sketch below).
Manual
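A sketch for Experiment 17, assuming a trained segmentation network `net` and an input tensor `image` of shape (1, C, H, W), e.g., from the Experiment 16 sketch:

```python
# Overlay the predicted segmentation mask on the input image for inspection.
import matplotlib.pyplot as plt
import torch

with torch.no_grad():
    pred = net(image).argmax(dim=1)[0]     # per-pixel class indices, shape (H, W)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(image[0, 0], cmap="gray")
axes[0].set_title("Input")
axes[1].imshow(image[0, 0], cmap="gray")
axes[1].imshow(pred, alpha=0.4)            # translucent mask overlay
axes[1].set_title("Predicted mask")
for ax in axes:
    ax.axis("off")
plt.show()
```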
Experiment 18
Train a Simple GAN: Implement a basic Generative Adversarial Network (GAN) in Python to generate synthetic images, e.g., MNIST digits, from a noise distribution (sketch below).
Manual
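A sketch for Experiment 18: a minimal MLP-based GAN training step in PyTorch for 28x28 images such as MNIST; the layer sizes, latent dimension, and learning rates are illustrative:

```python
# Minimal GAN sketch: an MLP generator and discriminator plus one training step.
import torch
import torch.nn as nn

LATENT_DIM = 64

generator = nn.Sequential(                 # noise -> fake 28x28 image (flattened)
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),    # outputs in [-1, 1]
)
discriminator = nn.Sequential(             # image -> probability of being real
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real):                      # real: (batch, 784) tensor scaled to [-1, 1]
    batch = real.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake = generator(noise)

    # Discriminator: label real images 1 and generated images 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator predict 1 for fakes.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```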
Experiment 19
Explore Conditional GANs: Investigate implementing a Conditional GAN that generates images conditioned on additional input, e.g., images of specific object categories (sketch below).
Manual
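A sketch for Experiment 19, extending the previous GAN idea: the generator receives an embedded class label alongside the noise vector, so samples of a chosen category can be requested; the discriminator (not shown) would be conditioned the same way. Sizes are illustrative:

```python
# Conditional generator: concatenate a label embedding with the noise vector.
import torch
import torch.nn as nn

LATENT_DIM, NUM_CLASSES = 64, 10

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, noise, labels):
        x = torch.cat([noise, self.label_emb(labels)], dim=1)
        return self.net(x)

# Generate eight samples conditioned on the digit "3".
gen = ConditionalGenerator()
samples = gen(torch.randn(8, LATENT_DIM), torch.full((8,), 3))
print(samples.shape)   # torch.Size([8, 784])
```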
Experiment 20
Compare TensorFlow vs. Keras: Experiment with building the same model (e.g., a CNN) using both TensorFlow and Keras, exploring their high-level vs. low-level API differences (sketch below).
Manual
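A sketch for Experiment 20, contrasting the high-level Keras API with a hand-written low-level TensorFlow training step for the same linear classifier; the input dimension assumes flattened 28x28 images:

```python
# The same softmax classifier expressed two ways: Keras vs. low-level TensorFlow.
import tensorflow as tf

# High-level: Keras manages variables, the loss, and the training loop.
keras_model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
keras_model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

# Low-level: variables, forward pass, and gradient updates written by hand.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

@tf.function
def train_step(x, y):                      # x: (batch, 784) float32, y: (batch,) int labels
    with tf.GradientTape() as tape:
        logits = tf.matmul(x, W) + b
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    return loss
```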
Dr. Abhishek Das is presently working as an Assistant Professor in the Department of Computer Science & Engineering, Centurion University of Technology & Management, Paralakhemundi, Odisha, India. He completed his B.Tech degree at Biju Patnaik University of Technology and his M.Tech degree at Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India, in […]