Inspiring Lab Pvt. Ltd. successfully organized its first Artificial Intelligence (AI) Workshop for beginners on November 23, 2019. The event was attended by 20 shortlisted AI enthusiasts looking to take their first steps toward building a career in this field. The event started off at 10:30 am with an opening statement from the CEO of Inspiring Lab Pvt. Ltd., Mr. Rajeev Bista.
He gave a brief introduction to Inspiring Lab and mentioned some key projects the company is currently working on that deliver Artificial Intelligence solutions to common day-to-day problems. He then went on to explain the company's motivation for starting an education program for young beginners.
The learning session was then kicked off by one of the instructors, Mr. Pranav Sharma. He started with a basic introduction to artificial intelligence and its numerous applications in day-to-day life, then discussed some of the state-of-the-art AI tools and techniques being developed and deployed in the world today, along with some of the key concepts required to formally define and describe AI systems. He then entered the realm of Machine Learning and its core concepts, which all the participants had been eagerly awaiting. He gave an in-depth overview of the types of machine learning algorithms in use today, i.e. supervised learning, unsupervised learning, and reinforcement learning, with suitable examples and analogies wherever necessary.

Within supervised learning, he shed light on the types of problems that AI developers typically tackle. First, he described common regression problems and their potential solutions: he explained fairly simple algorithms like linear regression, went on to introduce more complex techniques like polynomial regression and regularized regression, and clarified which kinds of problems call for each. He then moved on to classification problems and where they may arise, covering some of the most prominent classification algorithms in use today, such as logistic regression, decision tree classification, and support vector machines, and the distinctive scenarios in which each would be used. Turning to unsupervised learning, he first discussed its basic concepts and how it differs from supervised methods. As with the previous topics, he then introduced and explained some simple yet powerful unsupervised algorithms, e.g. the K-means algorithm and Singular Value Decomposition (SVD).
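To give a flavor of what an algorithm like K-means actually does, here is a minimal library-free sketch (the data and function names are ours, purely for illustration, not code from the workshop): it alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its cluster.

```python
def kmeans(points, k, iters=100):
    # Initialize centroids with the first k points (a common simple choice).
    centroids = points[:k]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                        for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # assignments stable: converged
            break
        centroids = new_centroids
    return centroids, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]  # two well-separated 1-D groups
centroids, clusters = kmeans(data, k=2)
```

On this toy data the two centroids settle near 1.0 and 9.0, recovering the two groups without any labels, which is exactly the "unsupervised" part.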
He also introduced some ideas from reinforcement learning and some common implementations of reinforcement learning methods. The machine learning session then concluded with a demonstration of classification algorithms on the Iris flower dataset, along with common techniques for testing and evaluating learning algorithms.
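The Iris demonstration itself used library tooling, but the train-then-evaluate workflow can be sketched without any libraries. The following stand-in (with made-up data and a nearest-centroid classifier of our choosing, not the workshop's actual demo) trains on labeled samples and measures accuracy on a held-out test split:

```python
def train(samples, labels):
    # Compute one centroid (feature-wise mean) per class.
    centroids = {}
    for lab in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == lab]
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    # Assign x to the class whose centroid is closest (squared distance).
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Two made-up classes in a 2-D feature space (Iris has 4 features, 3 classes).
X_train = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]
y_train = ["a", "a", "b", "b"]
X_test = [[1.1, 0.9], [4.8, 5.0]]
y_test = ["a", "b"]

model = train(X_train, y_train)
accuracy = sum(predict(model, x) == y
               for x, y in zip(X_test, y_test)) / len(y_test)
```

Holding out a test set the model never saw during training is the key evaluation idea: accuracy on those points estimates how the classifier will behave on new data.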
After a short tea break, the next session, on the essentials of deep learning, was started by our second instructor, Mr. Sagun Kayastha. He delivered an in-depth look at the ideas behind deep learning and explained the significance of the layered architecture of deep learning systems. He covered the history and the important techniques devised over the years, and briefly surveyed the current state of the art. He then dived into an in-depth analysis of artificial neural networks, explaining the core computations involved in constructing them along with intuitive descriptions of what those computations do within the network. He showed how “learning” happens in deep neural networks, explained the theory behind back-propagation, and described how a network's parameters are learned. This led to key ideas such as loss functions and error minimization, and ultimately to an explanation of the Gradient Descent algorithm. The deep learning session was then summed up with an interactive demonstration, using the TensorFlow Playground, of how nodes and layers combine to learn complex functions over widely varying datasets, followed by a brief code walkthrough on building neural networks with high-level libraries like Keras and TensorFlow, as well as code for building a neural network from scratch without any libraries.
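In the same spirit as the from-scratch demo, here is a compact sketch (our own illustrative code, not the workshop's) of the full loop: a tiny two-layer sigmoid network on the XOR problem, with back-propagation computing gradients and gradient descent updating the parameters so that the squared-error loss shrinks.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: a classic problem a single neuron cannot solve, but a hidden layer can.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [0, 1, 1, 0]

H = 4  # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
         for j in range(H)]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, Y)) / len(X)

lr = 1.0
initial_loss = loss()
for _ in range(3000):
    for x, t in zip(X, Y):
        h, y = forward(x)
        # Back-propagation: gradient of squared error through the output sigmoid.
        dy = (y - t) * y * (1 - y)
        # Hidden-layer gradients, propagated back through W2.
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(H)]
        # Gradient descent: step each parameter against its gradient.
        for j in range(H):
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy
final_loss = loss()
```

Watching `loss()` fall over the epochs is the "learning" the session described: every update is just the chain rule (back-propagation) feeding the gradient descent step.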
He then moved on to the application of deep learning methods in computer vision and image processing. First, he discussed how images are stored and operated on by a computer, and why computer vision is considered one of the most difficult disciplines in the world of Artificial Intelligence and Machine Learning. He covered some simple image pre-processing operations that do not involve deep learning, like basic edge detection and blurring/deblurring, and explained why they are not enough for the many complex tasks that computer vision problems demand. He then introduced Convolutional Neural Networks (ConvNets or CNNs) and went on to explain why they were such a revolutionary technique in the world of computer vision compared to traditional neural networks. The core computations involved in the convolution, activation, and pooling operations were explained in great detail, with another interactive demonstration of how filters transform target images into various kinds of activation maps. The most prominent applications of computer vision, like object detection and recognition, motion analysis, and facial recognition, were shown and discussed briefly. Finally, the computer vision session ended with a live demonstration of object detection and recognition using just a camera and some raw code.
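The convolution computation at the heart of a CNN is simple enough to write out by hand. The sketch below (illustrative code of our own, with a hand-crafted kernel rather than a learned one) slides a 3×3 vertical-edge filter over a tiny image and produces a feature map that lights up where the edge is:

```python
def conv2d(image, kernel):
    # "Valid" convolution: slide the kernel over every position where it
    # fits entirely inside the image, taking an elementwise product-sum.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for r in range(out_h):
        for c in range(out_w):
            out[r][c] = sum(image[r + i][c + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

# A 4x6 "image" with a vertical edge between columns 2 and 3.
image = [[0, 0, 0, 1, 1, 1]] * 4
# A hand-crafted vertical-edge kernel; a CNN would *learn* such filters.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
fmap = conv2d(image, kernel)
```

Each row of `fmap` comes out as `[0, 3, 3, 0]`: strong responses only where the kernel straddles the edge. A CNN stacks many such learned filters, interleaved with the activation and pooling steps the session covered.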
After a second short break, the instructor moved on to introducing natural language processing. He first explained how language and other sequential data differ significantly from the datasets seen so far, and why tasks like data pre-processing and data evaluation are crucial to learning tasks that involve them. A number of notable natural language pre-processing techniques, such as tokenization, stemming, and lemmatization, were shown and discussed. Furthermore, he showed how pre-processing of language data is highly specific to the learning task being implemented: in sentiment analysis, for instance, we would not be interested in certain types of information like people's names and addresses, whereas in tasks involving statistical inference those parts would be highly important and could not be omitted. The instructor then went on to describe deep learning techniques for NLP using Recurrent Neural Networks, explaining the what, how, and why of their internal structure in great detail. Finally, the natural language processing session ended with a code demonstration of simple word classification in Python.
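A minimal pre-processing pipeline along the lines discussed might look like this (our own toy sketch: the stopword list is tiny and the suffix-stripping "stemmer" is deliberately crude compared to real stemmers like Porter's):

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "and"}

def tokenize(text):
    # Lowercase, then pull out alphabetic word tokens.
    return re.findall(r"[a-z]+", text.lower())

def stem(token):
    # Toy rule: strip a few common suffixes from long-enough tokens.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

def preprocess(text):
    # Tokenize, drop stopwords, and stem what remains.
    return [stem(t) for t in tokenize(text) if t not in STOPWORDS]

tokens = preprocess("The networks were learning the mappings")
```

This yields `["network", "learn", "mapping"]`: the uninformative function words are gone and inflected forms collapse to a shared stem, which is exactly why such steps precede feeding text to a learning algorithm.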
The entire workshop then came to a conclusion with a final address from the executive members of the Inspiring Lab team and a short feedback session on overall impressions of the workshop. On a more informal note, we would like to thank each and every person who attended the workshop with us. Thank you for giving us your time and attention for the entire day; we hope the session was greatly informative and inspiring. We are also grateful for the great feedback everyone gave us, and we promise to work rigorously on improvements to make our sessions better wherever possible.