As you can imagine, that is a lot of books and references, so we have voted for the ones most relevant to us. This year we have rediscovered a magnificent companion in books, just when the pandemic has forced us to limit and rethink the very way we keep each other company.
In this article we present the most recommended books. There are books on statistics, mathematics, Machine Learning, Deep Learning and visualization, but also books that focus on the dilemmas posed by Artificial Intelligence and others on team and business management. Finally, we have grouped all the titles presented below into two clusters: on the one hand, books of a more academic or technical character; on the other, popular-science reading that helps us better understand this exciting world of Data and Artificial Intelligence.
Deep Learning (Adaptive Computation and Machine Learning Series). Ian Goodfellow, Yoshua Bengio, Aaron Courville
A reference text that comprehensively builds up the Deep Learning domain from first principles. It is divided into three parts: “Applied Maths and Machine Learning Basics”, which reviews the mathematics and concepts needed to tackle the subject; “Deep Networks: Modern Practices”, where the ideas and applications of Deep Learning are presented; and “Deep Learning Research”, where a variety of more advanced ideas and concepts are reviewed. Although it is a theoretical book, it is relatively easy to read, even enjoyable in the first two parts. It shines as reference material, or as a way to reinforce concepts learned in more practical texts such as “Deep Learning with Python” by F. Chollet.
Pattern Recognition and Machine Learning. Christopher Bishop
An academic text that introduces the key areas of Machine Learning, from probability and linear models to graphical models, neural networks and latent variable models. What makes it original is the way it explains things: the book begins with a very basic topic (linear models), but uses it to illustrate all the key concepts (overfitting, Bayesian inference, regularisation, model selection). This helps the reader grasp the “essence” of Machine Learning very early on, which then makes the more complicated areas easier to understand.
The book is a bit old and does not cover certain newer, now-consolidated advances (Deep Learning, causality); even so, it remains relevant, since after all the basics have not changed that much. It was also written at a time when there were not so many practical resources, so the book stays at the level of mathematical explanations and graphical examples (both of which are excellent).
The Elements of Statistical Learning. Trevor Hastie, Robert Tibshirani and Jerome Friedman.
This is one of the reference books on Machine Learning that has become a classic over time. It was first published in 2001, before Data Science and Machine Learning became popular. Written by Trevor Hastie, Robert Tibshirani and Jerome Friedman, three statisticians from Stanford University, the book is an exquisite journey through the world of Machine Learning with a mathematical and statistical approach, never abusing theory and always accompanied by didactic, easy-to-follow examples and very useful, elegant graphics. The book covers topics that sound familiar today but remain essential for practitioners of predictive modelling, such as regression and classification methods, regularisation techniques, kernel-based methods, ensemble methods, model selection and evaluation, and dimension reduction techniques, among others.
Networks. An Introduction. M.E.J. Newman.
The best reference for getting into, and even going quite deep into, the world of complex networks and graph analytics. It is a dense book that starts with the most basic theoretical and mathematical definitions of the fundamentals of networks. It is very detailed on network metrics and structures, similarity algorithms, path finding and propagation algorithms, among others. It is not very practical (there is no code), but the language and examples are very understandable if you have a basic knowledge of linear and matrix algebra. One of its sections also deals with the computational cost of implementing these algorithms in practice.
Moreover, it is very useful for understanding how to model problems with these kinds of techniques, whether for social networks, computer networks or epidemics. On the latter, which is unfortunately very topical due to COVID-19, there is an entire chapter, “Epidemics on Networks”, dedicated to algorithms for modelling contagion (SIR, SIS, SIRS, etc.). It appears in the last part of the book, which covers processes on networks, the time evolution of signals and their dynamics. I think this is something difficult to find in the literature, and it is very well addressed here.
Machine Learning. A Probabilistic Perspective. Kevin P. Murphy.
A good reference book for delving into all aspects of machine learning; it has become a classic since its first edition in 2012.
The author does an excellent job of presenting the different fields that have contributed to the advancement of the area, from the most statistical approaches (linear regression, logistic regression, model selection, graphical models) to the most computational (decision trees, kernels, neural networks), seeking a Bayesian probabilistic common thread that remains pragmatic. In addition, many of the techniques are linked to applications in computer vision, natural language processing and networks.
In the coming months, a renewed and expanded second edition will appear, covering both the fundamentals and the latest developments, especially in Deep Learning. Even better, the code for the examples and figures will be available in Python, complementing the book’s rigorous mathematical presentation.
Weapons of Math Destruction. Cathy O’Neil.
In this book, Cathy O’Neil argues, in her own words, that “big data increases inequality and threatens democracy”. Beyond this perhaps slightly sensationalist phrase, the author shows us through several real examples and personal experiences how many decisions that affect us on a daily basis are made by algorithms rather than by humans (Weapons of Math Destruction, as she calls them). And even if they are mathematically based, this does not make them fair and neutral: they may contain biases (for example, due to the input data they use or the cultural background of the programmer) and fail to take ethical and moral concerns into account.
Although it is an entertaining and well-written book (one example is the simile of food preparation used to explain what a model is), I would recommend it to an audience that already has some prior knowledge of Artificial Intelligence.
Although the book is broadly critical of algorithms, in its last part it proposes a solution to the problem; I’m not going to spoil it for you, you’ll discover it by reading it ;).
How Charts Lie, Alberto Cairo.
Social media has made graphs, infographics and charts ubiquitous and easier to share than ever before. While these visualisations can inform us better, they can also mislead us by showing incomplete or inaccurate data, suggesting deceptive patterns, or simply by being poorly designed.
Many of us are not sufficiently prepared to interpret the images that politicians, journalists, advertisers and even employers present every day, which allows bad actors to easily manipulate them to promote their own agendas. Public conversations are increasingly driven by data and, to make sense of them, we must be able to decode and use visual information. By examining contemporary examples ranging from infographics of election results to maps of global GDP and charts of box office records, How Charts Lie teaches us how to do just that.
The Book Of Why: The New Science of Cause and Effect. Pearl, J. and Mackenzie, D.
This book establishes a connection between what we understand by Data Science and what scientific discovery really is (correlation versus causation). To do so, it relies on various metaphors, such as “the ladder of causation”, which I find quite intuitive for the less experienced reader.
The book traces the history of the struggle between correlation and causation from various points of view (philosophical, moral and mathematical), using examples such as the scientific community’s long battle to establish tobacco as a cause of cancer.
Towards the end of the book, a mathematical framework is established, drawing on big data and traditional statistics, for working more formally with questions of causality. The book is written by Judea Pearl, Turing Award winner and inventor of Bayesian networks, together with Dana Mackenzie.
Creativity, Inc., by Ed Catmull
Written by Ed Catmull (founder of Pixar and one of the fathers of computer graphics), it narrates various periods of his life, from his academic days at the University of Utah to the recent history of Pixar, passing through his transition from academic to private-sector research.
The book contains lessons on managing teams and creating cultures that foster creativity, drawn from his experience of managing environments with a strong need for experimentation, the sharing of new ideas and the handling of uncertainty.