3.2 Trillion USD on health per year

The U.S. spends more on health care than any other country

On December 27, 2016, Dieleman et al. published a paper [1] that uses data from the National Health Expenditure Accounts to estimate US spending on personal health care and public health by condition, age and sex group, and type of care. The paper was covered in the Washington Post by Carolyn Y. Johnson on December 27 at 11:00 AM.

Here is a link to the original paper:

[1] Dieleman JL, Baral R, Birger M, Bui AL, Bulchis A, Chapin A, Hamavid H, Horst C, Johnson EK, Joseph J, Lavado R, Lomsadze L, Reynolds A, Squires E, Campbell M, DeCenso B, Dicker D, Flaxman AD, Gabert R, Highfill T, Naghavi M, Nightingale N, Templin T, Tobias MI, Vos T, Murray CJL. US Spending on Personal Health Care and Public Health, 1996-2013. JAMA. 2016;316(24):2627-2646. doi:10.1001/jama.2016.16885

Here is the article (shortened) from the Washington Post:

American health-care spending, measured in trillions of dollars, boggles the mind. Last year, we spent $3.2 trillion on health care, a number so large that it can be difficult to grasp its scale.

A new study published in the Journal of the American Medical Association reveals what patients and their insurers are spending that money on, breaking it down by 155 diseases, patient age and category — such as pharmaceuticals or hospitalizations. Among its findings:

  • Chronic — and often preventable — diseases are a huge driver of personal health spending. The three most expensive diseases in 2013: diabetes ($101 billion), the most common form of heart disease ($88 billion) and back and neck pain ($88 billion).
  • Yearly spending increases aren’t uniform: Over a nearly two-decade period, diabetes and low back and neck pain grew at more than 6 percent per year — much faster than overall spending. Meanwhile, heart disease spending grew at 0.2 percent.
  • Medical spending increases with age — with the exception of newborns. About 38 percent of personal health spending in 2013 was for people over age 65. Annual spending for girls between 1 and 4 years old averaged $2,000 per person; older women 70 to 74 years old averaged $16,000.

Here is the link to the original article:
https://www.washingtonpost.com/news/wonk/wp/2016/12/27/the-u-s-spends-more-on-health-care-than-any-other-country-heres-what-were-buying/?tid=pm_business_pop&utm_term=.71fc517cdc11
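
Worth a quick back-of-the-envelope check: compounded over the study's 1996-2013 window, those growth rates diverge dramatically. A minimal Python sketch (only the rates and the time span come from the study above, the rest is plain arithmetic):

```python
# Minimal sketch: what a constant annual growth rate compounds to over
# the 1996-2013 study window. Illustrative arithmetic only.

def compound_growth(rate_per_year, years):
    """Total spending multiplier after compounding for `years` years."""
    return (1.0 + rate_per_year) ** years

years = 2013 - 1996  # 17 one-year steps in the study window
print(compound_growth(0.06, years))   # ~2.7x: diabetes, back and neck pain
print(compound_growth(0.002, years))  # ~1.03x: heart disease spending
```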

Machine Learning for Health Informatics

LNAI 9605 Machine Learning for Health Informatics available

14.12.2016 LNAI 9605 just appeared

Machine Learning for Health Informatics. Lecture Notes in Artificial Intelligence LNAI 9605

Holzinger, Andreas (ed.) 2016. Machine Learning for Health Informatics: State-of-the-Art and Future Challenges. Cham: Springer International Publishing, doi:10.1007/978-3-319-50478-0

[book homepage]

Machine learning (ML) is the fastest growing field in computer science, and Health Informatics (HI) is amongst its greatest application challenges, promising future benefits in improved medical diagnoses, disease analyses, and pharmaceutical development. However, successful ML for HI needs a concerted effort, fostering integrative research among experts from diverse disciplines, ranging from data science to visualization.

Tackling complex challenges needs both disciplinary excellence and cross-disciplinary networking without any boundaries. Following the HCI-KDD approach, which combines the best of two worlds, the aim is to support human intelligence with machine intelligence.

This state-of-the-art survey is an output of the international HCI-KDD expert network and features 22 carefully selected and peer-reviewed chapters on hot topics in machine learning for health informatics; they discuss open problems and future challenges in order to stimulate further research and international progress in this field.

NIPS 2016

NIPS 2016 is over

A crazy 5700-people event is over: NIPS 2016 in Barcelona. Registration took place on Sunday, December 4th; on Monday, December 5th, the tutorials were traditionally presented, concluded by the first keynote talk given by Yann LeCun (now director at Facebook AI Research) and completed by the official opening and the first poster presentation. On Tuesday, December 6th, after starting with a keynote by Drew Purves (Google DeepMind), parallel tracks on clustering and graphical models took place, concluded by a keynote given by Saket Navlakha (The Salk Institute) and completed by parallel tracks on deep learning and machine learning theory, plus poster sessions and demonstrations. Wednesday was opened by a keynote from Kyle Cranmer (New York University) and the award talk “Matrix Completion has No Spurious Local Minimum”, and was dominated by parallel tracks on algorithms and applications, concluded by a keynote by Marc Raibert (Boston Dynamics), who presented the latest advances in robot learning, and parallel tracks on deep learning and optimization, completed by the poster sessions with cool demonstrations. Thursday was opened by keynotes from Irina Rish (IBM) and Susan Holmes (Stanford), followed by parallel tracks on interpretable models and cognitive neuroscience, concluded by various symposia lasting until 21:30! Friday and Saturday were dedicated to whole-day workshops – the Sunday was reserved for recreation on the sand beach of Barcelona 🙂

NIPS is definitely the most exciting conference, with an amazing variety of topics and themes revolving around machine learning, with all sorts of theory and applications.


Machine Learning with Fun

Google Research hosts a number of very interesting so-called A.I. experiments, where you can play with machine learning algorithms in a very amusing way. A recent example is QUICK, DRAW! *). This is an online guessing game that challenges humans to hand-sketch a given object (such sketches are called doodles). The game uses a neural network to learn from the input data:

https://quickdraw.withgoogle.com

which is part of the A.I. Experiments platform:

https://aiexperiments.withgoogle.com

and here is the explanatory video:
https://www.youtube.com/watch?v=oOwfiYnRi5c

Have fun and enjoy!

Here you see more than 100,000 hedgehog drawings made by humans on the internet:

https://quickdraw.withgoogle.com/data/hedgehog
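
As a toy illustration of the guessing idea (not Google's actual model), one could train a small convolutional network to predict a doodle's category. The sketch below assumes doodles rasterized to 28×28 grayscale bitmaps, as offered on the Quick, Draw! data pages, and uses random stand-in arrays so it runs without any download:

```python
# Toy sketch (not Google's model): a tiny CNN that guesses a doodle's
# category from a 28x28 grayscale bitmap. Random stand-in data is used
# so the script runs without the real Quick, Draw! data.
import numpy as np
import tensorflow as tf

num_classes = 10  # e.g. hedgehog, cat, bicycle, ...
x_train = np.random.rand(256, 28, 28, 1).astype("float32")  # fake doodles
y_train = np.random.randint(0, num_classes, size=256)       # fake labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=32)

probs = model.predict(x_train[:1])  # "guess" one doodle's category
print("guessed class:", int(probs.argmax()))
```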

*) not to be confused with QuickDraw [1], a sketch-based drawing tool that facilitates drawing precise geometry diagrams. It can automatically recognize sketched diagrams containing components such as line segments and circles, infer geometric constraints relating the recognized components, and use this information to “beautify” the sketched diagram. This “beautification” is based on an algorithm that iteratively computes various sub-components of the components using an extensible set of deductive rules (a toy sketch of this idea follows the references below).

[1] Cheema, S., Gulwani, S. & LaViola, J. QuickDraw: improving drawing experience for geometric diagrams. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012. ACM, 1037-1064. doi: 10.1145/2207676.2208550

[2] https://experiments.withgoogle.com/ai
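
To give a flavour of the deductive-rule idea in [1] (a toy sketch, not the authors' algorithm): one such rule could snap a sketched segment's angle to the nearest “nice” angle, and a beautifier would apply rules of this kind iteratively until the diagram stops changing.

```python
# Toy beautification rule (illustrative only, not the algorithm of [1]):
# snap a segment's angle to the nearest multiple of 15 degrees while
# preserving its midpoint and length.
import math

def snap_angle(segment, step_deg=15.0):
    (x1, y1), (x2, y2) = segment
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    length = math.hypot(x2 - x1, y2 - y1)
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    snapped = round(angle / step_deg) * step_deg
    dx = math.cos(math.radians(snapped)) * length / 2.0
    dy = math.sin(math.radians(snapped)) * length / 2.0
    return ((cx - dx, cy - dy), (cx + dx, cy + dy))

# A slightly crooked, roughly horizontal stroke becomes exactly horizontal:
print(snap_angle(((0.0, 0.0), (10.0, 0.4))))
```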

Visualization of High Dimensional Data

Google is doing experiments with the visualization of high-dimensional data. This experiment helps visualize what’s happening in machine learning. It allows coders to see and explore their high-dimensional data. The goal is to eventually make this an open-source tool within TensorFlow, so that any coder can use these visualization techniques to explore their data.
Built by Daniel Smilkov, Fernanda Viégas, Martin Wattenberg, and the Big Picture team at Google.
This work is based on a method developed by Laurens van der Maaten & Geoffrey Hinton in 2008:
Maaten, L. V. D. & Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9, 11, 2579-2605, http://www.jmlr.org/papers/v9/vandermaaten08a.html
t-Distributed Stochastic Neighbor Embedding (t-SNE, spoken: Disney) is a (prize-winning) nonlinear technique for dimensionality reduction that is particularly well suited for embedding high-dimensional data sets into R² or R³ for visualization. The technique can be implemented via Barnes-Hut approximations, allowing it to be applied to large real-world data sets (“big data”).
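
For a quick hands-on impression, here is a minimal sketch using scikit-learn's t-SNE implementation (which uses the Barnes-Hut approximation by default) on the small built-in digits data set, standing in for the projector's data:

```python
# Minimal sketch: embed the 64-dimensional digits data set into R^2
# with t-SNE; scikit-learn's implementation defaults to Barnes-Hut.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(digits.data)

plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target,
            cmap="tab10", s=8)
plt.title("t-SNE embedding of the digits data set")
plt.show()
```

Each point is a handwritten digit; well-separated clusters correspond to the ten digit classes.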
For details please refer directly to the original paper. Compare this method to our own work on subspace clustering.
Neural Information Processing Systems

Holzinger Group at NIPS

Our crazy iML concept has been accepted at the CiML 2016 workshop (organized by Isabelle Guyon, Evelyne Viegas, Sergio Escalera, Ben Hamner & Balázs Kégl) at NIPS 2016 (December 5-10, 2016) in Barcelona:

https://docs.google.com/viewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjFiMGRmNzQ5MmM5MTZhYzE

Obama on humans-in-the-loop

How artificial intelligence will affect jobs

In a discussion with Barack OBAMA [1] on how artificial intelligence will affect jobs, he emphasized how important human-in-the-loop machine learning will become in the future. Trust, transparency and explainability will be THE driving factors of future AI solutions! The interview was led by Wired [2] editor Scott DADICH and MIT Media Lab [3] director Joi ITO. I recommend that my students watch the full video. Barack Obama demonstrates a good understanding of the field and indirectly underlines the importance of our research on the human-in-the-loop approach [4], despite all progress towards fully automatic approaches and autonomous systems.
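
To make the human-in-the-loop idea concrete, here is a minimal uncertainty-sampling sketch (a generic active-learning loop, not the iML algorithm of [4]): the machine trains on a few labels, asks the human about the instance it is least certain about, and retrains.

```python
# Minimal human-in-the-loop sketch (uncertainty sampling): the learner
# repeatedly queries the "human" for the label it is least certain
# about. The human oracle is simulated here by the true labels of a
# synthetic data set; in real iML a domain expert would answer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(10))            # start with a handful of labels
unlabeled = list(range(10, len(X)))

model = LogisticRegression()
for _ in range(20):                  # 20 queries to the human
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    margin = np.abs(probs[:, 0] - probs[:, 1])
    query = unlabeled[int(margin.argmin())]  # most uncertain instance
    labeled.append(query)            # the "human" supplies y[query]
    unlabeled.remove(query)

print("accuracy after 20 queries:", model.score(X, y))
```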

For more information see:

[1] Barack Obama was the 44th President of the United States of America and was in office from January 20, 2009 to January 20, 2017. He was born on August 4, 1961 in Honolulu (Hawaii).

[2] Wired is a monthly tech magazine which has reported since 1993 on how emerging technologies may affect culture, politics, and economics. It is very interesting to note that Wired is known for coining the popular terms “long tail” and “crowdsourcing”. https://www.wired.com

[3] The MIT Media Lab is an interdisciplinary research lab at the Massachusetts Institute of Technology in Cambridge (MA), part of the Boston metropolitan area, just across the Charles River and not far away from the Harvard campus.

[4] Holzinger, A., Plass, M., Holzinger, K., Crisan, G.C., Pintea, C.-M. & Palade, V. 2017. A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. arXiv:1708.01104


Google releases their Syntactic Parser Open Source

Google researchers spend a lot of time thinking about how computer systems can read and understand human language in order to process it in intelligent ways. On May 12, 2016, Slav Petrov, who is based in New York and leads Google's machine learning for natural language group, announced the release of SyntaxNet, an open-source neural network framework implemented in TensorFlow that provides a new foundation for Natural Language Understanding (NLU). The release includes all the code needed to train new SyntaxNet models on your own data, as well as Parsey McParseface, an English parser that the Googlers have trained and that can be used to analyze English text. Parsey McParseface is built on powerful machine learning algorithms that learn to analyze the linguistic structure of language and can explain the functional role of each word in a given sentence.
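
A quick way to try Parsey McParseface (a minimal sketch, assuming SyntaxNet has been built from the tensorflow/models repository as described in the announcement; syntaxnet/demo.sh is the demo script shipped there):

```python
# Pipe a sentence through the Parsey McParseface demo script and print
# the resulting dependency parse (assumes a built SyntaxNet checkout).
import subprocess

sentence = "Bob brought the pizza to Alice."
result = subprocess.run(["syntaxnet/demo.sh"],
                        input=sentence.encode("utf-8"),
                        stdout=subprocess.PIPE,
                        cwd="models/syntaxnet")  # adjust to your checkout
print(result.stdout.decode("utf-8"))  # ASCII rendering of the parse tree
```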

Read more:
http://googleresearch.blogspot.co.at/2016/05/announcing-syntaxnet-worlds-most.html

Literature:

Andor, D., Alberti, C., Weiss, D., Severyn, A., Presta, A., Ganchev, K., Petrov, S. & Collins, M. 2016. Globally normalized transition-based neural networks. arXiv preprint arXiv:1603.06042.

Petrov, S., McDonald, R. & Hall, K. 2016. Multi-source transfer of delexicalized dependency parsers. US Patent 9,305,544.

Weiss, D., Alberti, C., Collins, M. & Petrov, S. 2015. Structured Training for Neural Network Transition-Based Parsing. arXiv:1506.06158.

Vinyals, O., Kaiser, Ł., Koo, T., Petrov, S., Sutskever, I. & Hinton, G. Grammar as a foreign language. Advances in Neural Information Processing Systems, 2015. 2755-2763.

Deep Learning Playground openly available

The TensorFlow team – part of the Google Brain project – has recently open-sourced on GitHub a nice playground for testing and learning the behaviour of deep learning networks, which can also be used under the Apache License:

http://playground.tensorflow.org

Background: TensorFlow is an open source software library for machine learning. There is a nice video “large scale deep learning” by Jeffrey Dean. TensorFlow is an interface for expressing machine learning algorithms, along with an implementation for executing such algorithms on a variety of heterogeneous systems, ranging from smartphones to high-end computer clusters and grids of thousands of computational devices (e.g. GPUs). The system has been used for research in various areas of computer science (e.g. speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery). The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license on 9 November 2015 and are available at www.tensorflow.org
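
As a minimal sketch of the programming model of that era (the graph-and-session API of TensorFlow 1.x; later 2.x versions execute eagerly instead): first a dataflow graph of operations is built, then it is run in a session.

```python
# Minimal sketch of TensorFlow's original dataflow-graph model
# (TensorFlow 1.x API): build a graph first, then run it in a session.
import tensorflow as tf

a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = a * b  # adds a multiplication node to the graph

with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 3.0, b: 7.0}))  # -> 21.0
```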

Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J. & Devin, M. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv preprint arXiv:1603.04467.

It is also discussed in episode 24 of the Talking Machines podcast.