Awesome, not awesome.
#Awesome
"Researchers at MIT and elsewhere launched a research project to identify and tackle machine learning usability challenges in child welfare screening. In collaboration with a child welfare department in Colorado, the researchers studied how call screeners assess cases, with and without the help of machine learning predictions. Based on feedback from the call screeners, they designed a visual analytics tool that uses bar graphs to show how specific factors of a case contribute to the predicted risk that a child will be removed from their home within two years.
The researchers found that screeners are more interested in seeing how each factor, like the child’s age, influences a prediction, rather than understanding the computational basis of how the model works. Their results also show that even a simple model can cause confusion if its features are not described with straightforward language.
These findings could be applied to other high-risk fields where humans use machine learning models to help them make decisions, but lack data science experience, says Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and senior author of the paper." - Adam Zewe, Writer. Learn More from MIT News >
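For readers curious what a factor-contribution bar graph like the one described above might look like, here is a minimal, hypothetical sketch. The feature names, case values, and linear weights are all made up for illustration; this is not the researchers' actual model, data, or interface.

```python
# Hypothetical illustration: show how each factor of a single case pushes a
# predicted risk score up or down, as a horizontal bar chart.
# All names and numbers below are toy values, not real child-welfare data.
import matplotlib.pyplot as plt
import numpy as np

features = ["child_age", "prior_referrals", "caregiver_age", "open_cases"]
values = np.array([4.0, 2.0, 29.0, 1.0])              # one hypothetical case
coefficients = np.array([-0.03, 0.12, -0.01, 0.08])   # toy linear weights

# For a linear model, each factor's contribution is coefficient * value.
contributions = coefficients * values

plt.barh(features, contributions)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Contribution to predicted risk (toy units)")
plt.title("How each factor pushes the prediction (hypothetical example)")
plt.tight_layout()
plt.show()
```

The point of a display like this, per the study, is that screeners want plain-language, per-factor explanations rather than details of the model's internals.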
#Not Awesome
"A scientist who wrote a leading textbook on artificial intelligence has said experts are “spooked” by their own success in the field, comparing the advance of AI to the development of the atom bomb.
Prof Stuart Russell, the founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said most experts believed that machines more intelligent than humans would be developed this century, and he called for international treaties to regulate the development of the technology." - Nicola Davis, Science Correspondent. Learn More from The Guardian >