Time to Nurture Your Inner Maths
There is nothing I love more than digging into a new problem area and learning something new. Yes, it can be frustrating at times: uncovering the usual tricks and traps, working through sample problems and documentation that may or may not be well written and debugged. But the feeling when you finally hit a breakthrough, the payoff for all those hours of intense study and persistence, is wonderful.
What many of us do not easily admit is the intimidation we may feel around AI and machine learning: what level of maths and statistics do we need to master first, just to read an article or try out the latest framework? This can become a real hurdle that prevents us from diving in. The sense of “math shame” is reinforced by the widely held industry bias that one must hold an advanced degree in mathematics or data science in order to participate fully. I am here to say this is just not so.
Don’t get me wrong here: those advanced degrees are extremely important and valued, but there are many paths to learning and mastering new areas of knowledge. The key is to identify a suitable problem you want to solve, identify the requisite skills and expertise you will need, and use that to focus your explorations. To state the obvious, we all learn differently and at different paces, and there is no single way to approach it. If you try a particular source and don’t seem to follow what they are talking about, try another source. Don’t give up, and don’t assume it is some personal failing. Many brilliant people are terrible communicators; the two do not always go hand in hand.
Developing successful AI/ML projects takes a diverse set of perspectives and expertise, as you will see reflected in the Perspicace service offerings. One thing is clear: we need to brush off the cobwebs and nurture our inner maths, namely linear algebra, a bit of calculus, and statistics. It won’t do to adopt the latest AI framework if you don’t understand how the model works under the covers. The trick is not to be intimidated; find a good source, dig in, and spend some time with it. You will feel an amazing sense of accomplishment when you do.
What follows are some exceptional resources for getting started in AI, machine learning, and deep learning:
3Blue1Brown
I think Grant Sanderson is a genius, a master of creative visualization. When it comes to articulating challenging math (and physics) concepts and illustrating alternative perspectives that inform problem solving, I have never seen anything quite like his work. He has series on linear algebra, calculus, and neural networks, the last including two episodes devoted to backpropagation: the intuition and the underlying calculus. I highly recommend starting with Grant before diving into any AI/ML example, either to refresh or to instill a solid foundation in the theory behind the approach. After you gain the visual understanding, proceed to some well-documented sample code to reinforce the concepts.
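To give a flavor of that theory-to-code step, here is a minimal sketch of my own in NumPy (not code from the videos): the chain rule from the backpropagation episodes, applied by hand to a single sigmoid neuron with a squared-error loss, using toy values I made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy values, chosen only for illustration
x = np.array([0.5, -1.2, 2.0])   # input features
w = np.array([0.1, 0.4, -0.3])   # weights
b = 0.2                          # bias
y = 1.0                          # target

# Forward pass: z = w.x + b, a = sigmoid(z), loss = (a - y)^2
z = np.dot(w, x) + b
a = sigmoid(z)
loss = (a - y) ** 2

# Backward pass: the chain rule, term by term
dloss_da = 2.0 * (a - y)         # d(loss)/da
da_dz = a * (1.0 - a)            # sigmoid'(z) in terms of a
grad_w = dloss_da * da_dz * x    # d(loss)/dw, since dz/dw = x
grad_b = dloss_da * da_dz        # d(loss)/db, since dz/db = 1

print(loss, grad_w, grad_b)
```

Seeing each derivative as one line of code is exactly the kind of reinforcement I mean.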
It may take going back and forth a bit between the visual depiction, the mathematics, and some hands-on work with the code for it all to settle in properly. Personally, I need to vectorize everything before it solidifies; I have to fight through the prevalent use of sloppy loops, and I have frequently rewritten entire course exercises to make sense of them and get the correct results. I find vectorization more natural (having trained first on C/C++ pointers and vectors); you might find the reverse to be true, with loops intuitive while vectorization eludes you. Again, don’t give up: you ultimately want to be able to move between both forms to understand the concepts and the inner workings of the models. The frameworks can be great time-savers, but if you don’t understand what is happening inside, you can readily misapply them and go seriously wrong. A concrete before-and-after follows below.
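To make the loop-versus-vectorization point concrete, here is a hypothetical example of my own in NumPy. Both halves compute the same mean squared error; the second expresses it the way the math is written, in a single line.

```python
import numpy as np

# Synthetic data: 1000 examples, 3 features, known parameters plus noise
X = np.random.rand(1000, 3)
theta = np.array([0.5, -1.0, 2.0])
y = X @ theta + 0.1 * np.random.randn(1000)

# Loop version: one example and one feature at a time, easy to trace but slow
total = 0.0
for i in range(X.shape[0]):
    pred = 0.0
    for j in range(X.shape[1]):
        pred += X[i, j] * theta[j]
    total += (pred - y[i]) ** 2
mse_loop = total / X.shape[0]

# Vectorized version: mirrors the matrix notation, and far faster
errors = X @ theta - y
mse_vec = np.mean(errors ** 2)

assert np.isclose(mse_loop, mse_vec)  # same answer, two idioms
```

Being able to read either form, and translate between them, is the fluency I am advocating.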
Andrew Ng
Stanford Machine Learning (Coursera, one course; also available as a YouTube series)
deeplearning.ai Deep Learning Specialization (Coursera, five courses)
Andrew’s approach to AI and ML is refreshingly accessible. I highly recommend the Stanford ML course for anyone who wants a deeper understanding of the theories and models, even before diving into the deeplearning.ai DL specialization. I recommend all six courses: the overlap between them is quite small, and they employ different programming languages, environments, and frameworks, which gives great exposure to the real-world mix of tools used in the industry today (the specific tools and languages matter less than learning to switch rapidly and pick up the latest). Working at a leisurely pace, Coursera suggests the Stanford ML course takes ~11 weeks and the deeplearning.ai DL specialization ~16 weeks; your mileage will vary.
The Stanford ML course goes deeper into the fundamentals and takes time to cover the math. It also spends time teaching you the importance of vectorization. In particular, Andrew takes the time in the ML course to teach you the math symbols, their significance, and their application, preparing you to read mathematical notation in scientific papers and Jupyter notebooks alike. The DL specialization moves quickly among the different models with far less theory, and generally assumes you already know (or don’t care about) the math.
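As a taste of the payoff (my own rendering of the standard notation, not a quote from the course), the linear regression cost function reads the same in a paper as in a notebook once you can parse it, and its vectorized form maps directly onto one line of code:

```latex
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
          = \frac{1}{2m} \lVert X\theta - y \rVert^2,
\qquad \text{where } h_\theta(x) = \theta^{\top} x
```

Here $m$ is the number of training examples and the rows of $X$ are the $x^{(i)}$; once that vocabulary is second nature, the notation stops being a barrier.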
The DL specialization does a great job of covering important best practices in structuring projects, establishing metrics, hyperparameter tuning, and risk mitigation strategies. It also covers the newer, more prevalent frameworks such as TensorFlow. If you have a solid math background, of course you could simply take selected DL courses and grab the occasional video on YouTube for a quick refresh.
While I found the programming exercises in all six courses very informative, I think the ML course assignments were better curated, with tighter alignment between the video coverage, the quizzes, and the assignments. The DL specialization assignments had a large number of errors, numerous JupyterHub environment issues, and many complex questions left unanswered in the forums; bless the mentors and student community for their diligence and persistence in posting fixes. Despite these nuisances and frustrations, you should absolutely forge through each and every exercise to complete your understanding; they are invaluable. I promise you, with the groundwork laid by these resources, you will be able to follow the latest research articles and continue your advanced training hands-on with the latest algorithms.
Have you found an excellent AI/ML course or reference you would like to pass along? Let us know and we’ll feature it in upcoming posts.
We will be setting up the Perspicace GitHub in the coming months and plan to post working examples of different models and use cases. Let us know if there are particular models that you would like to see featured.