When the financial crisis hit in 2008, it became clear that the good times of the previous decade were not going to last. Researchers, who had grown used to ten years of steadily increasing budgets, immediately began making the case that investment in science and technology was the best way to ensure economic recovery.

In many countries, it worked. Despite plunging national revenues, governments around the world included extra funding for science and technology in their stimulus budgets.

But often the money came with strings attached.

In the United States, the huge $10 billion stimulus for the National Institutes of Health was directed mainly towards short-term projects with definite goals. Speculative blue-skies research was not welcome. Since the rules of the stimulus package meant the money had to be spent within two years, preference was given to projects that could start immediately and deliver results quickly. Other research agencies, such as the National Science Foundation, also got a boost in funding from the stimulus, but with similar conditions.

The US government was serious about its demand for a quick return on its investment. Vice President Joe Biden has already written one report outlining what the stimulus funding for science has achieved so far, with further follow-up reports planned.

And a huge project, known as STAR METRICS (Science and Technology for America's Reinvestment: Measuring the Effect of Research on Innovation, Competitiveness and Science), began last spring to develop ways of measuring the outcomes of science investments and demonstrating their benefits to the public. Both the government and the research community have high hopes that it will help bring some clarity to the muddled field of research impacts.

Even countries that did not provide their researchers with a stimulus boost are pinning their hopes for economic recovery on science and technology, and reserving their funding for work that is likely to generate economic return.

Despite its shattered economy and humiliating EU and IMF bailout, Ireland prioritized science funding in its emergency budget late last year. It was just about the only area spared cuts; in fact, funding was increased, partly reversing the cuts of the year before. The country is trying to innovate its way out of recession.

Even Singapore, long the bastion of high salaries and lavish no-strings-attached funding, has started asking to see something in return for its generosity. In September, the government abruptly announced that almost one-third of the research budget will be shifted to competitive “industrial alignment funds”. To get their hands on the money, researchers will now have to show that their work has industrial applications. But the country where the ‘impact agenda’ has advanced the furthest, and faced perhaps the fiercest criticism, is the UK.

Several years ago, the seven research councils, which provide the majority of project-based funding in the UK, began asking applicants to include a short "impact statement" alongside their grant applications, setting out what the results might be used for in the future. The councils promised that the statements would not affect funding decisions – not even as tie-breakers between equally good proposals; they just wanted researchers to spend a bit more time thinking about the possible practical outcomes of their work.

The statements were unpopular, but with nothing riding on them researchers didn't take them too seriously – rumor has it many cell biologists simply put in a vague reference to cancer treatment and called it a day.

Then, in 2009, the impact agenda got serious. The government announced that, starting in 2014, 25% of quality-related research funding – the money universities are given as a block grant based on their performance in a periodic national assessment of research quality – would be based on how well each university exploited its research. Impact will be assessed through specific case studies and an overarching description of how each department goes about exploiting the results of its research.

Researchers were generally unimpressed. They spent the next year taking every opportunity on offer to voice their opposition, but to no avail. This month the Higher Education Funding Council for England, the government agency responsible for running the assessment and distributing the funding, will announce its final plans for the 2014 assessment, and impact is almost certainly going to remain a part of it. The best opponents can probably hope for is that the weighting be reduced to 20%.

But researchers should not be too concerned about the new emphasis on impact. Any university department worth its salt should be able to come up with examples of useful work it has done in the past, and put in place effective mechanisms for recognizing and exploiting such work in the future. And the new regime has the potential to shake up the system in interesting ways. A pilot of the impact assessment last year threw up some unexpected results – many of the most prestigious universities did much worse than expected, while smaller, business-focused institutions took the top spots.

In the end, the governments that provide funding for research are entitled to ask what they are getting out of the deal. They are looking for ways to rebuild their economies on a more solid foundation than financial services, and science and technology can provide the building blocks. As prosperity slowly returns, researchers can once again look forward to rising budgets – but they will have to get used to justifying the expense.


DOI: 10.1016/S1369-7021(11)70098-0