A Discussion with Laurent Najman on
Deep Learning and Interdisciplinary
Research in the 21st Century

30th December 2017

Dr. Laurent Najman is a professor of computer science at the University of Paris and Laboratoire d'Informatique Gaspard-Monge. Throughout his career, he has worked on applications of image processing and computer vision in various areas, including the military, medicine, and astronomy. Dr. Najman has published over 200 scholarly articles, including the book Mathematical Morphology, which he co-authored with Hugues Talbot. He was a keynote speaker at the International Conference on Advances in Pattern Recognition (ICAPR) 2017, held at the Indian Statistical Institute, Bangalore. Our junior researcher Suryoday Basak caught up with him and asked him a few questions on the advent of deep learning and how data science will eventually come to be an omnipresent driving force for emerging areas of science.

Suryoday Basak: You have worked on many interdisciplinary projects, spanning image processing in medicine, the military, and even astrophysics. What do you see as the future of data science? And what impact does deep learning have on the way things are done today?

Laurent Najman: Deep learning is really shifting what we're doing today in computer vision at large. There has been a 'before' and there will be an 'after': there is a change in the way we're doing things today. We can't ignore machine learning, or computational learning in general. What I think is that, progressively, machine or computational learning will become just a commodity. I don't believe that everything can be solved with the techniques we have today, but one problem that deep learning solves is the computation, or extraction, of features. This is a very important problem by itself. We need to analyze features in data anyway, and for this, most of what we are doing today in computational learning can be used. I think this is something very generic; it's true not only for computer vision but for many fields. For example, at the biggest conference for radiologists, held last June, one of the keynote speeches was on how radiologists as we know them today are going to disappear. And why? Because today, a physician, let's say a cardiologist, doesn't need the opinion of a radiologist on images; as the technologies mature, any cardiologist is able to read and understand the medical images by himself or herself. There's no need for the opinion of anyone else! What radiologists can do now is expand their knowledge and do the 'science': analyze not only the image data but any sort of data procured from patients, and then maybe form an informed opinion on something new. This is something that the practising cardiologist will never do, because she will never have time to do it. So data science will progressively invade every aspect of our lives [laughs]! But it's not only computational learning; there is also visualizing, analyzing, and a lot of other things we have to do with the data. I can't really imagine what the possibilities are; they are numerous!
So my opinion is that we need to know a few things about deep learning. We definitely need to understand what the technology is doing. At the moment, we don't know. We just know it works, but that is not enough. It's not a science if it works and we don't know why. And more importantly, we don't know when it fails! We need to know when it fails. At the same time, we will always have a need for the basic techniques because they can be used on top of deep learning.

Suryoday Basak: So deep learning actually hides a lot of details that would otherwise be supplied by hand by scientists who have worked on computer vision. Does deep learning really automate the process of classification without prior feature extraction?

Laurent Najman: At this stage of research, we don't know what deep learning is doing. No one knows, but people are trying to understand. You can look at the features provided by any type of deep learning algorithm, and you can see that at the low levels, they are definitely very similar to wavelets and similar constructs, something that a human can engineer. Humans can find low-level features, but here they are detected automatically. So they may be a little better; I don't know, but maybe you don't need to care. At the higher levels of the deep learning machinery, we don't understand exactly what happens. It's not possible at this stage. There's no explanation of how the features combine and what they are doing in the end. So deep learning is a way to transform data from one space to another space where it's easier to classify. And it does the job very well! But for the moment, there's no theory that allows us to explain the features provided by deep neural networks, and this is something that we have to work on.
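Dr. Najman's point that deep learning transforms data into a space where classification becomes easier can be illustrated with a toy sketch (our own illustration, not anything from the interview): XOR-style labels are not linearly separable in the raw inputs, but a single nonlinear feature, hand-picked here as a stand-in for a learned hidden representation, lets a trivial linear rule classify perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))                # raw 2-D inputs
y = (np.sign(X[:, 0]) != np.sign(X[:, 1])).astype(int)   # XOR-style labels

# No single linear rule on (x1, x2) separates these labels. A nonlinear
# feature (hand-picked here; a trained network would learn its own) maps
# the data to a space where a simple threshold works:
feature = X[:, 0] * X[:, 1]           # negative exactly when the signs differ
pred = (feature < 0).astype(int)      # linear classifier in feature space

accuracy = float((pred == y).mean())
print(f"accuracy in the transformed space: {accuracy:.2f}")  # 1.00 on this toy data
```

The product feature here plays the role of the opaque higher-level features Dr. Najman describes: it makes the classification step trivial, even though nothing about the raw coordinates suggests it.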

Suryoday Basak: That really provides a clearer perspective on why deep learning is taking over. I suppose that if you don't need to worry about feature extraction, a lot of your work is taken care of! My last question comes from a different aspect of your work. As you've worked on various interdisciplinary projects, could you elaborate on the attitude of scientists from domains other than computer science towards the advent of machine learning and deep learning? In my work on astroinformatics, I've often observed an inherent cultural difference between physicists and computer scientists: the former might be bent on using one set of, say, four to five features for an elaborate analysis, whereas the latter might make use of all available features for object classification. In your experience, how did you overcome such barriers of discipline and mindset?

Laurent Najman: It is usually difficult to work with people from other domains at first, because they might not have any knowledge of what we are doing in our field, and there is a first stage where we have to understand each other. It is very important to reach that understanding. For example, in one of my projects, we had to do tumor segmentation and measurement from images. At the beginning of the project, there were ups and downs in our collaboration with the physicians, because at some level they did not understand what we were doing. But when we presented the final project, they were amazed! So now we have a very good salesman who is a doctor, and he has totally bought into the idea! He's even selling the product, for free [laughs]! He's sharing it with anyone who collaborates with us, and he wants it to become 'the' software in the field, standardizing research in the area of tumor segmentation.

Indeed, it is difficult to communicate if you don't understand each other. This requires human qualities, tenacity, and also showing a lot of examples demonstrating that your technology works. Trust is an important matter when you start a project. Eventually, people come to understand what's going on. Take the example that you mentioned: okay, someone is working with a few features, but that is because they think those features are important. If you can do some kind of statistical analysis to find which features matter more, I think you can show the effectiveness of your methods.
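The "statistical analysis to find which features matter more" that Dr. Najman suggests could take many forms; one simple, model-agnostic option (our illustration, not his prescription) is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. A sketch on synthetic data, where only the first of four features carries any signal:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))        # four candidate features
y = (X[:, 0] > 0).astype(int)        # only feature 0 carries signal

def accuracy(data):
    # Fixed stand-in "model": a simple threshold on feature 0.
    return float(((data[:, 0] > 0).astype(int) == y).mean())

baseline = accuracy(X)
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])     # destroy feature j's link to y
    importances.append(baseline - accuracy(Xp))

# Feature 0 shows a large accuracy drop; the uninformative ones show none.
print([round(v, 2) for v in importances])
```

An analysis like this gives the physicist and the computer scientist a shared, quantitative ground for deciding which of the handful of favored features, or of the full feature set, actually drives the classification.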

Suryoday Basak with Dr. Laurent Najman