Introduction to this Class: Wednesday, 16th March 2016, 17:00 – 19:30 Seminarraum Argentinierstrasse
Lecture Room after 16th March > SemR DA gruen 06B (link to TISS)
You can find assignments and tutorial material on GitHub:
This graduate course follows a research-based teaching (RBT) approach and discusses experimental methods for combining human intelligence with machine learning to solve problems in health informatics. For practical applications we focus on Python, which is currently the most widely used language for machine learning worldwide.
Motto of the Holzinger-Group: Science is to test crazy ideas – Engineering is to put these ideas into Business.
Machine learning is a highly practical field; consequently, this class is a VU: there will be a written exam at the end of the course, and during the course the students will solve related assignments.
Machine learning (ML) is one of the fastest-growing fields in computer science (Jordan & Mitchell, 2015. Machine learning: Trends, perspectives, and prospects. Science, 349, (6245), 255-260), and it is well accepted that health informatics is amongst its greatest challenges (LeCun, Bengio, & Hinton, 2015. Deep learning. Nature, 521, (7553), 436-444).
Business related issues:
McKinsey: An executive’s guide to machine learning
NY Times: The Race Is On to Control Artificial Intelligence, and Tech’s Future
Economist: Million-dollar babies
Employability for University Graduates:
“Fei-Fei Li, a Stanford University professor who is an expert in computer vision, said one of her Ph.D. candidates had an offer for a job paying more than $1 million a year, and that was only one of four from big and small companies.”
Market Opportunity for Spin-offs:
“By 2020, the market for machine learning applications will reach $40 billion, IDC, a market research firm, estimates.
By 2018, IDC predicts, at least 50 percent of developers will include A.I. features in what they create.”
The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. In automatic machine learning (aML), great advances have been made, e.g., in speech recognition, recommender systems, and autonomous vehicles. Automatic approaches, e.g. deep learning, benefit greatly from big data with many training sets. In the health domain, however, we are sometimes confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in Reinforcement Learning (RL), Preference Learning (PL) and Active Learning (AL). The term iML can be defined as algorithms that can interact with agents and can optimize their learning behaviour through these interactions, where the agents can also be human. This human-in-the-loop can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem is greatly reduced in complexity through the input and assistance of a human agent involved in the learning phase. However, although humans are excellent at pattern recognition in dimensions ≤ 3, most biomedical data sets are of much higher dimensionality, making manual data analysis very hard.

Successful application of machine learning in health informatics requires considering the whole pipeline from data preprocessing to data visualization. Consequently, this course fosters the HCI-KDD approach, a synergistic combination of methods from two areas – Human-Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD) – with the goal of supporting human intelligence with machine learning.
For the successful application of ML in health informatics, a comprehensive understanding of the whole HCI-KDD pipeline is necessary, ranging from the physical data ecosystem to an understanding of the end-user in the problem domain. In the medical world, the inclusion of privacy, data protection, safety and security is mandatory.
Differentiation from and bridging to existing courses:
At TU Vienna there are currently two courses on "machine learning", i.e.
184.702 3VU Machine Learning, in winter term, which deals mainly with principles of supervised and unsupervised ML, including pre-processing and data preparation, as well as evaluation of Learning Systems. ML models discussed may include e.g. Decision Tree Learning, Model Selection, Bayesian Networks, Support Vector Machines, Random Forests, Hidden Markov Models, as well as ensemble methods;
183.605 3VU Machine Learning for Visual Computing, in winter term, which deals mainly with linear models for regression and classification (Perceptron, Linear Basis Function Models, RBF, historical overview), applications in computer vision, neural nets, error functions and optimization (e.g., pseudo-inverse, gradient descent, Newton's method), model complexity, regularization, model selection, VC dimension, kernel methods: duality, sparsity, Support Vector Machine, principal component analysis and Hebbian rule, canonical correlation analysis, Bayesian view of the above models, Bayesian regression, relevance vector machine, clustering and vector quantization (e.g., k-means);
Besides focusing on health informatics (biological, biomedical, medical, clinical) and health-related problems, we will build on and refer to the courses above to avoid any duplication, and will particularly focus on solving health problems with other ML approaches (both aML and iML).
Consequently, this course is an additional benefit for students of computer science: it fosters machine learning and shows examples in the important area of health informatics, which is currently a hot topic internationally and opens up many future opportunities.
Schedule (all lectures 17:00 – 20:00, Sem.R. DA grün 06B, Gebäude D Freihaus, Wiedner Hauptstrasse 8):

1. Machine Learning meets health informatics: introduction, challenges and future directions – [Course Slides, pdf, 7521KB]; [ML-Anonymization, pdf, 1835KB]
2. Health Data Jungle: Selected Topics on Fundamentals of Data and Information Entropy – [Course Slides, pdf, 8024KB]; [Mathematical Notations, pdf, 216KB]
3. Dimensionality Reduction and Subspace Clustering: Example for the Doctor-in-the-Loop – [Course Slides, pdf, 6331KB]; [Assignment-2, pdf, 129KB]; [Link to Github]
4. Human Learning vs. Machine Learning, Decision Making under Uncertainty and Reinforcement Learning – [Course Slides, pdf, 5790KB]
5. Probabilistic Graphical Models Part 1: From Knowledge Representation to Graph Model Learning – [Course Slides, pdf, 11200KB]; [Tutorial Slides, pdf, 1200KB]
6. Probabilistic Graphical Models Part 2: From Graphical Learning to Graph Bandits – [Course Slides, pdf, 9999KB]
7. Evolutionary computing for solving problems in health informatics, Part 1 – [Course Slides, pdf, 2129KB]
8. Evolutionary computing for solving problems in health informatics, Part 2 – [Course Slides, pdf, 9522KB]
9. Selected Topics on Privacy Aware Machine Learning – [Course Slides, pdf, 2743KB]; [Assignment Privacy Aware ML]
10. On Grand Challenges: Multi-Task Learning and Transfer Learning – [Course Slides, pdf, 5100KB]; [Tutorial Slides, pdf, 1108KB]
11. Selected Topics on Tumor Growth Learning – [Tumor Tutorial Slides, pdf, 3510KB]
Week 11 Reading – Machine Learning meets health informatics: challenges and future directions
- Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Springer Brain Informatics, 1-13. doi:10.1007/s40708-016-0042-6
- Dossier: Holzinger (2016) Interactive Machine Learning for Health Informatics
- Watch the video of Google DeepMind Health: https://youtu.be/teZ2m5oTKwM
- “Medicine is so complex, the challenges are so great … we need everything that we can bring to make our diagnostics more precise, more accurate and our therapeutics more focused on that patient.” Sir Malcolm GRANT, NHS England, in: Machine learning: Royal Society conference report (part of the conference series “Breakthrough science and technologies: Transforming our future”), https://royalsociety.org/topics-policy/projects/machine-learning
Watch the videos: https://www.youtube.com/playlist?list=PLg7f-TkW11iX3JlGjgbM2s8E1jKSXUTsG
Week 15 Reading – Health Data Jungle: Selected Topics on Fundamentals of Data and Information Entropy
Keywords in this lecture: data – underlying physics of data; biomedical data sources – taxonomy of biomedical data; biomedical data structures – data integration, data fusion in the life sciences; clinical view on data, information, knowledge; information – probabilistic information; information theory – information entropy; cross-entropy – Kullback-Leibler divergence; mutual information – Pointwise Mutual Information (PMI);
- Holzinger, A., Dehmer, M. & Jurisica, I. (2014). Knowledge Discovery and interactive Data Mining in Bioinformatics – State-of-the-Art, future challenges and research directions. BMC Bioinformatics, 15, (S6), I1. doi:10.1186/1471-2105-15-S6-I1
- Holzinger, A., Hörtenhuber, M., Mayer, C., Bachler, M., Wassertheurer, S., Pinho, A. & Koslicki, D. (2014). On Entropy-Based Data Mining. In: Holzinger, A. & Jurisica, I. (eds.) Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401. Berlin Heidelberg: Springer, pp. 209-226. doi:10.1007/978-3-662-43968-5_12
Online available: https://pure.tugraz.at/portal/files/3108669/HOLZINGER_Entropy_based_data_mining.pdf
- Bigi, B. (2003). Using Kullback-Leibler Distance for Text Categorization. In: Sebastiani, F. (ed.) Advances in Information Retrieval: 25th European Conference on IR Research, ECIR 2003, Pisa, Italy, April 14–16, 2003. Proceedings. Berlin, Heidelberg: Springer, pp. 305-319.
- De Boer, P.-T., Kroese, D. P., Mannor, S. & Rubinstein, R. Y. 2005. A tutorial on the cross-entropy method. Annals of operations research, 134, (1), 19-67. doi:10.1007/s10479-005-5724-z
- Loshchilov, Ilya, Schoenauer, Marc & Sebag, Michele (2013). KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization. arXiv:1308.2655.
Additional reading to foster a deeper understanding of information theory related to the life sciences:
- Manca, Vincenzo (2013). Infobiotics: Information in Biotic Systems. Heidelberg: Springer. (This book is a fascinating journey through the world of discrete biomathematics and a continuation of Erwin Schrödinger's 1944 classic What Is Life? The Physical Aspect of the Living Cell, Dublin: Dublin Institute for Advanced Studies at Trinity College.)
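To make the central quantities of this lecture concrete, here is a minimal, self-contained Python sketch (illustrative only, not part of the official course material) of Shannon entropy, Kullback-Leibler divergence and cross-entropy for discrete distributions:

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits; p is a list of probabilities summing to 1."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) in bits (note: asymmetric)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(p, q):
    """Cross-entropy via the identity H(p, q) = H(p) + D_KL(p || q)."""
    return entropy(p) + kl_divergence(p, q)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty over 4 outcomes
skewed  = [0.70, 0.10, 0.10, 0.10]   # a more "informative" distribution

print(entropy(uniform))               # 2.0 bits: the maximum for 4 outcomes
print(entropy(skewed))                # strictly less than 2.0 bits
print(kl_divergence(skewed, uniform)) # > 0: the price of assuming uniformity
```

Note that D_KL is not a metric (it is not symmetric), which is why Bigi (2003) and the cross-entropy tutorial above treat it as a divergence rather than a distance.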
Week 17 Reading – Dimensionality Reduction and Subspace Clustering: Example for the Doctor-in-the-Loop
Keywords in this lecture: curse of dimensionality, feature spaces, feature selection, feature extraction, dimensionality reduction (PCA, ICA, FA, MDS, LDA (supervised), Isomap, LLE, Laplacian eigenmaps, autoencoders), main focus on: subspaces, subspace clustering;
- Hund, M., Böhm, D., Sturm, W., Sedlmair, M., Schreck, T., Ullrich, T., Keim, D. A., Majnaric, L. & Holzinger, A. 2016. Visual analytics for concept exploration in subspaces of patient groups: Making sense of complex datasets with the Doctor-in-the-loop. Brain Informatics, 1-15: doi: 10.1007/s40708-016-0042-6
- Kriegel, H. P., Kroger, P. & Zimek, A. 2009. Clustering High-Dimensional Data: A Survey on Subspace Clustering, Pattern-Based Clustering, and Correlation Clustering. ACM Transactions on Knowledge Discovery from Data (TKDD), 3, (1), 1-58, doi:10.1145/1497577.1497578.
- Parsons, L., Haque, E. & Liu, H. 2004. Subspace clustering for high dimensional data: a review. SIGKDD Explor. Newsl., 6, (1), 90-105, doi:10.1145/1007730.1007731. [link to pdf]
- Vidal, Rene 2011. Subspace Clustering. IEEE Signal Processing Magazine, 28, (2), 52-68, doi:10.1109/msp.2010.939739. [link to pdf]
- Koch, I. 2014. Analysis of Multivariate and High-Dimensional Data, New York, Cambridge University Press.
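As a small illustration of dimensionality reduction, the following pure-Python sketch (toy data, not course material) performs PCA on correlated 2-D data using the closed-form eigenvalues of the 2×2 covariance matrix:

```python
import math
import random

random.seed(0)
# Synthetic 2-D data where the two features are strongly correlated,
# so a single principal component captures almost all of the variance.
t = [random.gauss(0, 1) for _ in range(500)]
X = [(x, 2 * x + random.gauss(0, 0.1)) for x in t]

n = len(X)
mx = sum(p[0] for p in X) / n
my = sum(p[1] for p in X) / n
# Sample covariance matrix [[a, b], [b, c]]
a = sum((p[0] - mx) ** 2 for p in X) / (n - 1)
c = sum((p[1] - my) ** 2 for p in X) / (n - 1)
b = sum((p[0] - mx) * (p[1] - my) for p in X) / (n - 1)

# Closed-form eigenvalues of a symmetric 2x2 matrix
d = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
lam1, lam2 = (a + c) / 2 + d, (a + c) / 2 - d

explained = lam1 / (lam1 + lam2)   # variance explained by the first component
print(round(explained, 4))          # close to 1.0: the data are nearly 1-D
```

In high-dimensional biomedical data the picture is less forgiving: relevant structure often hides in different subspaces, which is exactly what subspace clustering (Kriegel et al., Parsons et al., Vidal, above) addresses.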
Week 18 Reading – Human learning vs. Machine learning:
Decision Making under Uncertainty and Reinforcement Learning
RL is in principle decision making under uncertainty; it has its roots in early psychology (Pavlov, Thorndike, Skinner) and cognitive neuroscience, and can consequently be seen as a bridge between brains and computers.
Keywords in this lecture: cognition as probabilistic inference, associative learning, memory, attention, concept learning, reasoning, causal inference, decision making and decision support (highly important for health informatics); single-agent RL, multi-agent RL, n-armed bandits, multi-agent reinforcement learning (MARL);
- Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S. & Hassabis, D. 2015. Human-level control through deep reinforcement learning. Nature, 518, (7540), 529-533.
- Niv, Y., Daniel, R., Geana, A., Gershman, S. J., Leong, Y. C., Radulescu, A. & Wilson, R. C. 2015. Reinforcement learning in multidimensional environments relies on attention mechanisms. The Journal of Neuroscience, 35, (21), 8145-8157.
- Vaidya, A. R. 2015. Neural Mechanisms for Undoing the “Curse of Dimensionality”. The Journal of Neuroscience, 35, (35), 12083-12084. http://www.jneurosci.org/content/35/35/12083.full
- Kaelbling, L. P., Littman, M. L. & Moore, A. W. 1996. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237-285. https://www.jair.org/media/301/live-301-1562-jair.pdf
- Sutton, R. S. & Barto, A. G. 1998. Reinforcement learning: An introduction, Cambridge, MIT press.
- Littman, M. L. 2015. Reinforcement learning improves behaviour from evaluative feedback. Nature, 521, (7553), 445-451.
- Space Invaders (1978): https://www.youtube.com/watch?v=437Ld_rKM2s
- Space Invaders (2015) Deep Mind: https://www.youtube.com/watch?v=iqXKQf2BOSE
- OpenAI Gym: A toolkit for development and comparison of reinforcement learning algorithms
- MMLF – Maja Machine Learning Framework in Python
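The n-armed bandit mentioned in the keywords above can be sketched in a few lines of Python. This toy epsilon-greedy agent (illustrative only, not an official assignment; arm means and parameters are invented) balances exploration against exploitation:

```python
import random

random.seed(1)

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1):
    """Epsilon-greedy agent for the n-armed bandit problem."""
    n = len(true_means)
    estimates = [0.0] * n          # running estimate of each arm's mean reward
    counts = [0] * n
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n)                        # explore
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit
        reward = random.gauss(true_means[arm], 1.0)          # noisy reward
        counts[arm] += 1
        # Incremental update of the sample mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, counts, total / steps

estimates, counts, avg = epsilon_greedy_bandit([0.2, 0.5, 1.0])
print(counts)   # the best arm (true mean 1.0) is pulled most often
```

The same exploration/exploitation tradeoff, at much larger scale, underlies the deep RL results of Mnih et al. (2015) listed above.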
Week 19 Reading – Probabilistic Graphical Models Part 1: From Knowledge Representation to Graph Model Learning
Keywords in this lecture: Reasoning under uncertainty, graph extraction, network medicine, metrics and measures, point-cloud data sets, graphical model learning
Introduction to Graphical Models:
- KOLLER, Daphne & FRIEDMAN, Nir (2009) Probabilistic graphical models: principles and techniques. Cambridge (MA): MIT press.
- Bishop, Christopher M (2007) Pattern Recognition and Machine Learning. Heidelberg: Springer [Chapter 8: Graphical Models]
- Wainwright, Martin J. & Jordan, Michael I. (2008) Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends in Machine Learning, Vol.1, 1-2, 1-305, doi: 10.1561/2200000001 [Link to pdf]
A hot topic in ML is graph bandits:
Week 20 Reading – Probabilistic Graphical Models Part 2: From Bayesian Networks to Graph Bandits
Keywords in this lecture: Graphical Models and Decision Making, structure learning, factor graphs, function prediction, protein network inference, graph-isomorphism, Bayes’ Nets, machine learning on graphs, subgraph discovery, similarity, correspondence, Gromov-Hausdorff distance, topological spaces in weakly structured data, probabilistic topic models, LDA, graph bandits, rare diseases, randomized clinical trials, bandit strategies, dynamic programming;
- Murphy, K. P. 2012. Machine learning: a probabilistic perspective, MIT Press. Chapter 26 (p. 907) – Graphical model structure
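A minimal illustration of inference in a two-node Bayesian network (disease → test result), using Bayes' rule by enumeration. All probabilities here are invented for illustration and are not clinical values:

```python
# Joint factorization of a two-node Bayesian network: P(D, S) = P(D) * P(S | D)
# D = disease present, S = positive test result
P_D = {True: 0.01, False: 0.99}            # prior (disease prevalence)
P_S_given_D = {True: 0.95, False: 0.05}    # P(S=True | D): sensitivity / false-positive rate

# Inference by enumeration: P(D=True | S=True) via Bayes' rule
joint = {d: P_D[d] * P_S_given_D[d] for d in (True, False)}
posterior = joint[True] / (joint[True] + joint[False])
print(round(posterior, 4))   # about 0.161: a positive test is far from a sure diagnosis
```

This base-rate effect, where a highly sensitive test still yields a low posterior under low prevalence, is exactly why reasoning under uncertainty matters so much in health informatics.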
Week 22 Reading – Evolutionary Algorithms for solving problems in health informatics, Part 1
Keywords in this lecture: computational intelligence (CI), evolutionary computing (EC), genetic algorithms (GA), medical decision making as a search problem, heuristics vs. analytics, diagnostic reasoning, evolutionary principles, biology as natural engineering, theory of evolution, Lamarck, Darwin, Baldwin, Mendel, biological universe vs. computational universe, Weighted Naive Bayesian, evaluation function = fitness function, k-armed bandits
Week 23 Reading – Evolutionary Algorithms for solving health informatics problems – Part 2
Keywords in this lecture: Medical Applications of evolutionary algorithms, Nature-Inspired Computing, NP-hard problems, Ant-Colony Optimization, Simulated Annealing, Problem solving: Human versus Computer, Solving NP-hard problems with the human in the loop, Multi-Agent-Hybrid Systems, Neuroevolution;
Also highly recommended, from Jason BROWNLEE: http://machinelearningmastery.com/machine-learning-in-python-step-by-step/
Scholarpedia Entry Neuroevolution: http://www.scholarpedia.org/article/Neuroevolution
Neuroevolution – a car learns to drive: https://www.youtube.com/watch?v=5lJuEW-5vr8
Neuroevolution – evolving creatures: https://www.youtube.com/watch?v=oiVNxAGC70Q
Genetic Evolution of a wheeled vehicle: https://www.youtube.com/watch?v=uxourrlPlf8
Neuroevolution by Reza Mahjourian: https://github.com/nnrg/opennero/wiki/NeuroEvolution
Neural Networks Research Group at University of Texas: http://nn.cs.utexas.edu/
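The evolutionary principles of these two lectures (selection, crossover, mutation, fitness function) can be sketched with a toy genetic algorithm on the classic OneMax problem. This is an illustrative sketch only; the operators are deliberately minimal:

```python
import random

random.seed(7)

def fitness(bits):
    """OneMax: the count of 1-bits; the optimum is the all-ones string."""
    return sum(bits)

def evolve(n_bits=30, pop_size=40, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection of size 2
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with per-gene probability p_mut
            child = [1 - g if random.random() < p_mut else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # close to (or at) the optimum of 30
```

Real applications in health informatics replace OneMax with an expensive evaluation function (e.g., a diagnostic model's accuracy), which is where the heuristics-vs-analytics discussion above becomes relevant.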
Week 24 Reading – Privacy Aware Machine Learning
Keywords in this lecture: Privacy, Safety, Security, Data Protection in Machine Learning for Health Informatics
Week 25 Reading – Multi-Task Learning and Transfer Learning
Keywords in this lecture: Multi-task learning, transfer learning,
Week 26 Reading – Tumor Growth Simulation
Keywords in this lecture: tumor growth learning, tumor development, simulation and visualization
Short Bio of Lecturer:
Andreas HOLZINGER <expertise> is head of the Holzinger Group, HCI-KDD, Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, and Assoc. Prof. (Univ.-Doz.) at the Institute of Information Systems and Computer Media, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology. His research interests lie in supporting human intelligence with machine learning to help solve complex problems in biomedical informatics and the life sciences. Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and his Habilitation (second Ph.D.) in Computer Science from Graz University of Technology in 2003. He was Visiting Professor in Berlin, Innsbruck, London (twice), and Aachen. He was program co-chair of the 14th IEEE International Conference on Machine Learning and Applications of the Association for Machine Learning and Applications (AMLA), is Associate Editor of the Springer journals Knowledge and Information Systems (KAIS) and Brain Informatics (BRIN) and of BMC Medical Informatics and Decision Making (MIDM), and is founder and leader of the international expert network HCI-KDD. Andreas is a member of IFIP WG 12.9 Computational Intelligence and co-chair of the cross-disciplinary IFIP CD-ARES 2016 conference, organizing a special session on privacy aware machine learning (PAML) for health data science. Since 2003 he has participated in leading positions in 30+ multi-national R&D projects (budget 4+ MEUR); 6500+ citations, h-index = 36, g-index = 140. Homepage: http://hci-kdd.org
Short Bio of Tutors:
Marcus BLOICE is finishing his PhD this year, on the application of deep learning to medical images.
This tutorial will present the installation and usage of Caffe – see caffe.berkeleyvision.org – a popular deep learning framework developed by the Berkeley Vision and Learning Center, University of California. It will first discuss how to obtain, compile, and run the Caffe software under Linux on a GPU-equipped workstation. We will then see how, through the use of a mid-range gaming GPU, training times can be reduced by a factor of 20 when compared to a CPU. Lastly, a concrete example of a deep learning task will be presented in the form of a live analysis of a confocal laser scanning microscopy dataset of skin lesion images, by training a model that automatically classifies these images into malignant and benign cases with a high degree of accuracy.
Bernd MALLE is pursuing his PhD with a focus on graph data structures.
The amount of patient-related data produced in today’s clinical setting poses many challenges with respect to collection, storage and responsible use. For example, in research and public health care analysis, data must be anonymized before transfer, for which the k-anonymity measure was introduced and successively enhanced by further criteria. As optimal k-anonymization is an NP-hard problem, modern approaches often make use of heuristics-based methods. This talk will convey the motivation for anonymization, followed by an outline of its criteria, as well as a general overview of methods and algorithmic approaches to tackle the problem. As the resulting data set will be a tradeoff between utility and individual privacy, we need to optimize those measures to individual (subjective) standards. Moreover, the efficacy of an algorithm strongly depends on the background knowledge of a potential attacker as well as on the underlying problem domain. I will therefore conclude the session by contemplating an interactive machine learning (iML) approach, pointing out how domain experts might get involved to improve upon current methods.
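To make the k-anonymity criterion concrete, here is a small illustrative check in Python. The records and the choice of quasi-identifiers are invented toy data, not a clinical dataset:

```python
from collections import Counter

# Toy generalized patient records; quasi-identifiers are (age range, zip prefix).
# The sensitive attribute (diagnosis) is left untouched by anonymization.
records = [
    {"age": "30-40", "zip": "80**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "80**", "diagnosis": "asthma"},
    {"age": "30-40", "zip": "80**", "diagnosis": "flu"},
    {"age": "40-50", "zip": "81**", "diagnosis": "diabetes"},
    {"age": "40-50", "zip": "81**", "diagnosis": "flu"},
]

QUASI_IDENTIFIERS = ("age", "zip")

def k_anonymity(rows, qi):
    """A table is k-anonymous if every combination of quasi-identifier
    values occurs at least k times; return the largest such k."""
    groups = Counter(tuple(r[c] for c in qi) for r in rows)
    return min(groups.values())

print(k_anonymity(records, QUASI_IDENTIFIERS))   # 2: the smallest group has 2 rows
```

Verifying k-anonymity is cheap, as above; the NP-hard part is choosing the generalizations and suppressions that achieve a target k with minimal loss of utility, which is where heuristics and the human-in-the-loop come in.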
Additional study material and reading:
Related Books in Machine Learning:
- MITCHELL, Tom M., 1997. Machine learning, New York: McGraw Hill. (Book Webpages)
Undoubtedly, this is the classic source from the pioneer of ML and a perfect first contact with the fascinating field, suitable for undergraduate and graduate students as well as developers and researchers. No previous background in artificial intelligence or statistics is required.
- FLACH, Peter, 2012. Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge: Cambridge University Press. (Book Webpages)
An introductory text for advanced undergraduate or graduate students, at the same time aimed at interested academics and professionals with a background in neighbouring disciplines. It includes the necessary mathematical details but emphasizes the how-to.
- MURPHY, Kevin, 2012. Machine learning: a probabilistic perspective. Cambridge (MA): MIT Press. (Book Webpages)
This book focuses on probability, which can be applied to any problem involving uncertainty – very much the case in medical informatics! It is suitable for advanced undergraduate or graduate students and requires some mathematical background.
- BISHOP, Christopher M., 2006. Pattern Recognition and Machine Learning. New York: Springer-Verlag. (Book Webpages)
This is a classic work aimed at advanced students, PhD students, researchers and practitioners, without assuming much mathematical knowledge.
- HASTIE, Trevor, TIBSHIRANI, Robert, FRIEDMAN, Jerome, 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer-Verlag (Book Webpages)
This is the classic groundwork from supervised to unsupervised learning, with many applications in medicine, biology, finance, and marketing. For advanced undergraduates and graduates with some mathematical interest.
To get an understanding of the complexity of the health informatics domain:
- Andreas HOLZINGER, 2014. Biomedical Informatics: Discovering Knowledge in Big Data.
New York: Springer. (Book Webpage)
This is a textbook for undergraduate and graduate students in health informatics, biomedical engineering, telematics or software engineering with an interest in knowledge discovery. The book fosters an integrated approach: in the health sciences, a comprehensive and overarching overview of the data science ecosystem and the knowledge discovery pipeline is essential.
- Gregory A PETSKO & Dagmar RINGE, 2009. Protein Structure and Function (Primers in Biology). Oxford: Oxford University Press (Book Webpage)
This is a comprehensive introduction to the building blocks of life, a beautiful book without ballast. It starts with the consideration of the link between protein sequence and structure, and continues to explore the structural basis of protein functions and how these functions are controlled.
- Ingvar EIDHAMMER, Inge JONASSEN, William R TAYLOR, 2004. Protein Bioinformatics: An Algorithmic Approach to Sequence and Structure Analysis. Chichester: Wiley.
Bioinformatics is the study of biological information and biological systems – such as the relationships between the sequence, structure and function of genes and proteins. The subject has seen tremendous development in recent years, and there is an ever-increasing need for a good understanding of quantitative methods in the study of proteins. This book takes the novel approach of covering both the sequence and structure analysis of proteins from an algorithmic perspective.
To strengthen your mathematical understanding:
- Dan SIMOVICI & Chabane DJERABA (2014) Mathematical Tools for Data Mining: Set Theory, Partial Orders, Combinatorics, Second Edition. London, Heidelberg, New York, Dordrecht: Springer.
This is a must-have book for every desk: a comprehensive compendium of the maths we need in our daily work, including topologies and measures in metric spaces.
- Kenneth H. ROSEN (2013) Discrete Mathematics and its Applications. New York: McGraw-Hill.
This discrete mathematics textbook weaves a thread through mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking and applications & modeling – highly recommended.
- Richard O. DUDA, Peter E. HART & David G. STORK (2001) Pattern Classification. New York: John Wiley. This is THE classic work from Bayesian Decision Theory, Nonparametric Techniques, Linear Discriminant Functions and Stochastic Methods with a useful and applicable mathematical foundation. A must-have for any data scientist.
- K.F. RILEY, M.P. HOBSON and S.J. BENCE (2006) Mathematical Methods for Physics and Engineering. Third Edition. Cambridge: Cambridge University Press. A very useful book for every undergraduate student, easy to read. Online available via: https://www.andrew.cmu.edu/~gkesden/book.pdf
Amongst the many tools (we will concentrate on Python), some useful and popular ones include:
- WEKA. Since 1993, the Waikato Environment for Knowledge Analysis is a very popular open source tool. In 2005 Weka received the SIGKDD Data Mining and Knowledge Discovery Service Award: it is easy to learn and easy to use [WEKA]
- Mathematica. Since 1988 a commercial symbolic mathematical computation system, easy to use [Mathematica]
- MATLAB. Short for MATrix LABoratory, it has been a commercial numerical computing environment since 1984, with a proprietary programming language by MathWorks; very popular at universities where it is licensed, but awkward for daily practice [Matlab]
- R. Coming from the statistics community, it is a very powerful tool implementing the S programming language, used by data scientists and analysts. [The R-Project]
- Python. Currently maybe the most popular scientific language for ML [Python Software Foundation]
An excellent source for learning numerics and science with Python is: http://www.scipy-lectures.org/
- Julia. Since 2012, a rising scientific language for technical computing with better performance than Python. IJulia, a collaboration between the Jupyter and Julia communities, provides a powerful browser-based graphical notebook interface to Julia. [julialang.org]
Please have a look at: What tools do people generally use to solve problems?
Recommendable reading on tools include:
- Wes McKINNEY (2012) Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. Beijing et al.: O’Reilly.
This is a practical introduction from the author of the Pandas library. [Google-Books]
- Ivo BALBAERT (2015) Getting Started with Julia Programming. Birmingham: Packt Publishing.
A good start for the Julia language and more focused on scientific computing projects, it is assumed that you already know about a high-level dynamic language such as Python. [Google-Books]
International Courses on Machine Learning:
- Carnegie Mellon University > Machine Learning Course 10-701 2015
by Eric XING (expertise) and Ziv-Bar JOSEPH (expertise)
- Carnegie Mellon University > Machine Learning Course 10-701/15-781 2011
by Tom MITCHELL (expertise)
- Carnegie Mellon University > Machine Learning Course 10-601 2015
by Maria-Florina BALCAN (expertise) and Tom MITCHELL (expertise)
- Carnegie Mellon University > Machine Learning Course 10-701 2013
by Alex SMOLA (expertise)
- Carnegie Mellon University > Machine Learning Course 10601b 2015
by Seyoung KIM: http://www.cs.cmu.edu/~10601b/
- Cornell University > Machine Learning CS 4780/5780 2014
by Thorsten JOACHIMS (expertise)
- Oxford > Department of Computer Science > Machine Learning: 2014-2015
by Nando de FREITAS (expertise)
Conferences, Workshops and Courses related to Machine Learning for Health Care
- NIPS 2015, Workshop on Machine Learning in Healthcare, Montreal (CA)
- IET Conference on Machine Learning in Healthcare, Balliol College, Oxford (UK)
- Using Machine Learning in Health Research, UCL London (UK)
A) Students with a GENERAL interest in machine learning should definitely browse these sources:
1) TALKING MACHINES – Human conversation about machine learning by Katherine GORMAN and Ryan P. ADAMS <expertise>
excellent audio material – 24 episodes in 2015 and three new episodes in season two 2016 (as of 14.02.2016)
2) VIDEOLECTURES.NET Machine learning talks (3,508 items as of 14.02.2016); ML is grouped into subtopics and displayed as a map
3) TUTORIALS ON TOPICS IN MACHINE LEARNING by Bob Fisher from the University of Edinburgh, UK
B) Students with a SPECIFIC interest in interactive machine learning should have a look at:
We are looking forward to working with you!