A report setting out scenarios for the future of research may have missed a chance to help carve out a new career path in structured research, a group of research professionals heard last week.
At Science and Engineering South’s Research Data Café, a group of researchers and research management professionals discussed the Research 4.0 report by the think-tank Demos and the education infrastructure organisation Jisc, and the implications of the report for their roles.
The report, which considers how Artificial Intelligence (AI) will influence the future of research, says that an explosion of new digital data sources, combined with powerful new analytical tools, is allowing researchers to investigate questions that would have been unanswerable a decade ago. But it also found that the capacity of digital infrastructure varies significantly across UK universities.
The report added that researchers using these new technologies are not receiving appropriate recognition for taking on tasks that are not easily automated yet are essential for AI to work, such as data cleaning, data annotation, curation and model building.
Participants in the Research Data Café welcomed the report’s recognition of the value of these time-consuming tasks. However, one person expressed disappointment that the report did not build on this recognition by highlighting the distinct skillsets involved in managing research data sets, and by recommending the development of a career pathway around structured research.
The participant added that there were gaps around AI and ethics. “Any usage of AI increases the amount of ethical consideration you need. We already need more ethical consideration than we currently give. We also need to think about the data itself – where do the ethics come in and how do we embed it at this stage?”
Gaps were also identified when it came to explainable AI – where decisions made by machines can be explained and justified – and conversations around the reproducibility of research. How can we guarantee that research reliant on AI methods will be reproducible in the future?
One participant thought there should be a more explicit recognition that humans and machines are co-workers, with machines doing the tasks to which they are best suited and humans providing input on the more nuanced ones. “Rather than one replacing the other, we should be aiming for augmented intelligence, combining the best of human and machine,” they reflected afterwards.
The participants endorsed the report’s call to democratise AI, with one noting that some “gatekeeping” currently existed, despite some AI specialists being willing to collaborate.
“There is a feeling amongst specialists in AI that it is dangerous to let other researchers loose on AI because of perceptions that they don’t know what is going on in the background,” observed one participant.
Others pointed to good examples of collaboration, particularly in areas such as public health, where it is now accepted that teams must include a range of disciplines, including social scientists and data scientists. They added that COVID had accelerated this trend and that old resistance to working across silos was now breaking down.
But it was also noted that it is often difficult for multidisciplinary research teams to find available AI expertise. “You need to have someone who knows what is going on in the ‘black box’,” said one participant. “One thing that is missing is a career path where that expertise can be developed.”
One participant welcomed the report’s recommendation about encouraging greater collaboration between academia and industry. “There’s often the issue that companies have a lot of data, often more than they know what to do with, and they are looking to use it for a specific purpose,” she said.
“Academics need data and might potentially look at that data in a different way because they would be using it for their research, rather than what the companies are using it for. And given that one of the big issues in AI is lack of access to good quality data, this seems very important.”