Discussion about this post

Tom Stafford:

One driver of the move towards technoscience is the reproducibility crisis (aka the "credibility revolution"). In some fields this has meant questioning many of the phenomena reported in papers (e.g. psychology); in others it is less a concern about the reliability of reports per se and more a concern about the scaling and translation of phenomena "out of the lab" (e.g. computer science). The causes are myriad, but the focus on papers ("not research, just adverts for research") is certainly one, and the mitigation is more open research practices and the sharing of the tools and processes that produce research findings, along with the findings themselves. This looks a lot like your description of technoscience.

Lots more could be said, but I'll leave it at saying that funders could do a lot more to help research communities help themselves to produce research which is more reliable, more scalable and more translatable to different contexts.

Valentin Guigon:

I'm not sure I'm convinced by the antagonism between publishing and producing novel forms of output, nor by the opposition between young researchers harnessing cutting-edge tools and older researchers.

First, with the increasing demands for open science, many of the by-products of research (such as data and methodologies) are intended to be shared. This necessity to make things public creates an incentive to develop new frameworks, pipelines, protocols, and organizations, and to build complex datasets, because they are intended for reuse. The FAIR principles, MLOps, DevOps, and interdisciplinarity at large are pushing the community to adopt this vision. At the same time, we have plenty of tools developed by the research community to automate QC, metadata, reporting, etc., which lowers the cost of formatting these research by-products.
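To illustrate the kind of lightweight automation mentioned above, here is a minimal sketch of a metadata "sidecar" generator in the spirit of the FAIR principles. All names, fields, and values are invented for illustration; it does not implement any specific metadata standard or tool.

```python
import hashlib
import json
from datetime import date

def make_metadata_sidecar(records, dataset_name, creator):
    """Build a minimal FAIR-style metadata record for a dataset.

    Hypothetical example: the fields loosely reflect findability and
    reusability concerns (provenance, size, integrity, license), not
    any particular schema.
    """
    # Serialize deterministically so the hash is stable across runs.
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return {
        "name": dataset_name,
        "creator": creator,
        "created": date.today().isoformat(),
        "n_records": len(records),
        # A content hash lets downstream users verify they hold the same data.
        "sha256": hashlib.sha256(payload).hexdigest(),
        "license": "CC-BY-4.0",  # assumption: a common open-data license
    }

# Toy usage with two invented trial records.
records = [{"subject": 1, "rt_ms": 412}, {"subject": 2, "rt_ms": 388}]
meta = make_metadata_sidecar(records, "example-study", "Jane Doe")
print(json.dumps(meta, indent=2))
```

In practice such a sidecar would be written next to the data file (e.g. as JSON), so that the dataset carries its own provenance and integrity information wherever it is reused.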

Second, I agree that young researchers are by design more oriented towards new technologies. During my PhD in neuroscience (2018-2022), there was a first wave of students moving away from traditional t-tests and ANOVAs in favour of more complex models, then a second wave going for Bayesian inference. Then transformer models arrived, which made it possible to treat qualitative data and extract new dimensions from it. I was able to learn and incorporate these new models into my work quickly thanks to my personal drive as a young researcher, and to my PhD advisors' vision of good science. Now, looking back on their most recent work, I can see they have been pretty fast at adopting those technologies and pushing them in directions I personally couldn't have thought of, likely due to their experience and my lack of it.


