According to a report in the British Financial Times, a group of leading scientists and medical statisticians warned on Friday that the use of artificial-intelligence techniques in some biomedical fields is producing inaccurate conclusions.
“I would not trust many of the research conclusions drawn from the analysis of big data using machine-learning techniques,” warned Genevera Allen, an associate professor at Rice University with an appointment at Baylor College of Medicine, at the annual meeting of the American Association for the Advancement of Science.
Machine learning has been used to study the relationships between scientific and medical data and certain phenomena, such as the association between genes and disease. In precision medicine, for example, researchers look for groups of patients with similar DNA so that treatments can be targeted at specific genes.
“A lot of these techniques are designed to make predictions,” Allen said. “But they never come back with ‘I don’t know’ or ‘I didn’t find anything,’ because they weren’t designed to consider that possibility.”
She declined to point to specific cases, but said that machine-learning conclusions drawn from cancer data are a good example.
“There are many cases that cannot be reproduced,” Allen said. “The clusters found in one study are quite different from those found in another. Why does this happen? Because most machine-learning techniques today simply say: ‘I found a group.’ But it would sometimes be far more helpful if they said: ‘I think some of these really do form a group, but I am not sure about the others.’”
Once machine learning finds a specific connection between patients’ genes and disease characteristics, human researchers may supply a plausible scientific explanation for the finding. But that does not mean the finding is correct.
Allen said: “You can always find a reason to explain why certain genes are grouped together.”
Computer scientists have only recently begun to recognize this problem, which can lead medical researchers down the wrong path and waste resources on attempts to confirm unreproducible results.
Allen and her colleagues are working to improve statistical and machine-learning techniques so that artificial intelligence can critique its own data analysis and indicate how likely it is that a given finding is real rather than an artifact of chance.
“One idea is to deliberately perturb the data and see whether the results stay the same,” she said.
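The perturbation check Allen describes can be sketched in a few lines. This is an illustrative example only, not her group's actual method: the minimal k-means routine, the pairwise agreement score, and the noise model are all assumptions chosen for the sketch. The idea is to re-cluster noisy copies of the data and measure how consistently points stay grouped together.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: random init from the data, then Lloyd iterations.
    (Illustrative stand-in for whatever clustering method a study uses.)"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def pair_agreement(a, b):
    """Fraction of point pairs that two clusterings treat the same way:
    grouped together in both, or separated in both. Invariant to how
    the cluster labels themselves are numbered."""
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    iu = np.triu_indices(len(a), k=1)
    return float((same_a[iu] == same_b[iu]).mean())

def cluster_stability(X, k, n_trials=20, noise=0.1, seed=0):
    """Re-cluster noise-perturbed copies of X and report the mean
    pair agreement with the clustering of the original data."""
    rng = np.random.default_rng(seed)
    base = kmeans(X, k)
    scores = []
    for t in range(n_trials):
        Xp = X + rng.normal(scale=noise, size=X.shape)
        scores.append(pair_agreement(base, kmeans(Xp, k, seed=t)))
    return float(np.mean(scores))
```

On data with genuinely separated groups the stability score stays near 1; on structureless data it drops, which is exactly the kind of "I am not sure about these" signal Allen argues the tools should report alongside the clusters themselves.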