One driver of the move towards technoscience is the reproducibility crisis (aka the "credibility revolution"). In some fields this has meant questioning many of the phenomena reported in papers (e.g. psychology); in others it is less a concern about the reliability of reports per se and more about whether phenomena scale and translate "out of the lab" (e.g. computer science). The causes are myriad, but the focus on papers ("not research, just adverts for research") is certainly one, and the mitigation is more open research practices: sharing the tools and processes that produce research findings, along with the findings themselves. This looks a lot like your description of technoscience.
Lots more could be said, but I'll leave it at this: funders could do a lot more to help research communities help themselves to produce research that is more reliable, more scalable, and more translatable to different contexts.
So a key question seems to be: what forms of scientific (and technical) output most support reproducibility and challenge? The answer is probably a proper synthesis of open tools and structured scientific rationale; either one to the exclusion of the other presents a problem.
This is an exceptionally thoughtful analysis of a crucial but under-examined question in metascience policy. You've identified something that many papers treat as a given—that "technoscience" is inherently good—and subjected it to the kind of rigorous interrogation it deserves.
Your question about the optimal mix of inputs and outputs is particularly sharp. The implicit assumption that more technical resources and tool-building is always better needs to be unpacked. What's the marginal value of the next dollar spent on software development versus traditional analysis? And critically, who decides what the "ideal mix" should be?
I'm also intrigued by your point about generational dynamics. There's definitely something to the idea that calls for "more technoscience" might partly reflect a desire for greater research leadership at earlier career stages, when technical skills are most current. This doesn't invalidate the argument, but it does suggest we should be explicit about whether we're arguing for a different way of doing research or a different distribution of power within research institutions.
The AI dimension you raise is fascinating—the possibility that AI could make certain technical activities so productive that we'd actually need less investment in them rather than more. This would completely flip the conventional wisdom. Worth watching closely.