Technoscience, Metascience and Social Science
How would funders fund differently if they wanted to fund 'technoscience'?
Here’s a common theme I’ve noticed in several recent metascience policy documents*: the claim that one of the reasons it would be good to fund new types of research organisations is that they would do research in a different way.
One term used to describe this different way of doing research (for example in the Tony Blair Institute’s paper on Lovelace Disruptive Invention Labs) is technoscience.
Technoscience seems to mean research that is characterised by:
devoting more resources to coding, technicians, and data curation rather than to the analysis of data, and
producing more outputs in the form of tools, datasets and software rather than academic papers**.
Not everyone uses the specific term “technoscience”, but it seems to me that a lot of people are talking about the general concept. Eric Gilliam talks about “engineering-heavy research”. Anastasia Bektimirova and Chris Fellingham frame it as “computational skills and infrastructures”. It forms part of the story of a range of research organisations and groups that are seen as cool and interesting. Examples include Google DeepMind (the DeepMind Nobel citation focuses on AlphaFold2 as a tool) and the Ellison Institute (“combining science, technology and commercial insight”, according to their mission statement). Ben Goldacre’s Bennett Institute of Applied Data Science, meanwhile, describes itself as “a multidisciplinary team of software developers, clinicians and academic researchers. We produce academic research papers, but we also use the same skills to build live, interactive, data-driven tools and services”. Everyone is at it!
For the most part, the metascience papers I’ve read take it for granted that technoscience is a good thing that we need to have more of, and focus on the importance of funding new organisational forms in part as a way to deliver it. But as a funder, I found myself thinking “this is the really interesting bit! Talk more about technoscience!” The idea of technoscience - whether it’s good, and if so how to get more of it - seems super-interesting, and worthy of analysis separate from questions about the institutional forms of research organisations.
So, a few questions:
Can we be more specific about why technoscience is good? Clearly greater technical skills and resources are useful, all else equal. But the idea of technoscience implies a change in the mix of inputs: spending more on technical skills and less on other things. What can we say about the ideal mix? How would we know? It also implies a change in the mix of outputs: more tools and datasets, fewer papers. Again, what do we think the ideal mix here is?
How different is a turn towards technoscience from general trends in research? I suspect almost all research leaders would say that technical skills and investment in things like software and data are important to their research centres. To what extent are advocates of technoscience calling for something materially different from this? Can we quantify this difference?
What is the relationship between technoscience on the one hand and organisational form and incentives on the other? Strong incentives on researchers to generate academic publications are clearly one mechanism militating against technoscience - if our goal is to produce outputs other than publications, then publish-or-perish culture is obviously a big barrier. To the extent that publish-or-perish is intrinsic to university culture, funding research organisations that aren’t universities might encourage technoscience. But what other incentives exist? Eric Gilliam’s paper on BBNs and the operating model of the Ellison Institute suggests that commercial incentives are conducive to technoscience. The TBI Lovelace Labs paper proposes labs that have long-term funding, perhaps implying that researchers with more freedom to follow their intrinsic motivations will be more open to technoscientific approaches. Assuming that technical resources are highly scalable, perhaps it’s simply about making big enough grants. What do we think is true?
More speculatively, is there a generational subtext to the debate on technoscience? In academia, like in a lot of fields, there’s a general tendency that people in the early stages of their career distinguish themselves by knowing the latest methodologies, while more senior people (with some exceptions) rely less on cutting-edge technical skills and more on crystallised intelligence and leadership. To what extent is “technoscience is good” a proxy for greater research leadership at earlier career stages?
What do advances in AI mean for the desirability of technoscience? The obvious answer is “more AI, more technoscience”. Google DeepMind is a technoscientific organisation based on its frontier AI skills and resources; the Ellison Institute sees AI as core to its aspirations. But there’s another aspect to this. AI changes the production possibilities in research, and specifically it is likely to make some sorts of technical activity massively more productive, perhaps meaning research organisations need to invest less in them rather than more. Andy Hall’s recent post on the 100x Research Institution gets at some of these ideas.
If technoscience is a good thing, to what extent should ESRC and similar organisations fund differently to enable it? As things stand, we spend significant amounts of money on social science data infrastructure, on skills training (for example our new research skills development hub), and on centres that enable longer term research. But is the mix right, and could our funding do a better job at delivering whatever the ideal mix of investments is, or creating the right incentives for researchers?
*Especially ones in the Dionysian tradition of metascience.
**Innovation studies fans will note that the idea of technoscience has gone on a long journey to take on this meaning - when it was popularised by people like Bruno Latour in the 1980s, technoscience was a descriptive - at times critical - term for the linked system of scientific research and technological development, rather than a normative label for a way of doing research.



One driver of the move towards technoscience is the reproducibility crisis (aka the "credibility revolution"). In some fields this has meant questioning many of the phenomena reported in papers (e.g. psychology); in others it is less a concern about the reliability of reports per se, and more a concern about the scaling / translation of phenomena "out of the lab" (e.g. computer science). The causes are myriad, but the focus on papers ("not research, just adverts for research") is certainly one, and the mitigation is more open research practices and the sharing of the tools and processes that produce research findings, along with the findings themselves. This looks a lot like your description of technoscience.
Lots more could be said, but I'll leave it at saying that funders could do a lot more to help research communities help themselves to produce research which is more reliable, more scalable and more translatable to different contexts.
This is an exceptionally thoughtful analysis of a crucial but under-examined question in metascience policy. You've identified something that many papers treat as a given—that "technoscience" is inherently good—and subjected it to the kind of rigorous interrogation it deserves.
Your question about the optimal mix of inputs and outputs is particularly sharp. The implicit assumption that more technical resources and tool-building are always better needs to be unpacked. What's the marginal value of the next dollar spent on software development versus traditional analysis? And critically, who decides what the "ideal mix" should be?
I'm also intrigued by your point about generational dynamics. There's definitely something to the idea that calls for "more technoscience" might partly reflect a desire for greater research leadership at earlier career stages, when technical skills are most current. This doesn't invalidate the argument, but it does suggest we should be explicit about whether we're arguing for a different way of doing research or a different distribution of power within research institutions.
The AI dimension you raise is fascinating—the possibility that AI could make certain technical activities so productive that we'd actually need less investment in them rather than more. This would completely flip the conventional wisdom. Worth watching closely.