However, most of these metrics assume that all authors contribute equally when a paper has multiple authors.

Oh boy - that is seriously flawed. There are big differences in what type of people wind up on author lists (in some disciplines it even differs who counts as author and who as coauthor; in some, the head of the department always takes the first-author spot, whether he contributed or not; and the customs on authorship even differ from nation to nation).
It's also nearly impossible to infer how much a coauthor contributed to a paper. In some cases it's substantive work, in others it's 'merely' testing or data collection.

The number of possible bias factors is huge. (Not that I have a better idea - but such impact-factor algorithms should always be taken with a grain, or better a lump, of salt.)
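To make the equal-credit assumption concrete, here is a minimal sketch of how an author's citation credit shifts under different counting schemes. The numbers and the weighting schemes (fractional and harmonic counting) are my own illustration, not anything proposed in the article:

```python
# Sketch: per-author credit for one paper under three counting schemes.
# The citation count and author count below are made up for illustration.

def full_credit(citations, n_authors):
    # Standard assumption: every coauthor gets the full citation count.
    return citations

def fractional_credit(citations, n_authors):
    # Split the credit equally among all coauthors.
    return citations / n_authors

def harmonic_credit(citations, n_authors, position):
    # Weight by byline position: the first author gets the largest share.
    weights = [1 / (k + 1) for k in range(n_authors)]
    return citations * weights[position] / sum(weights)

paper = {"citations": 120, "n_authors": 4}
for pos in range(paper["n_authors"]):
    print(pos,
          full_credit(paper["citations"], paper["n_authors"]),
          round(fractional_credit(paper["citations"], paper["n_authors"]), 1),
          round(harmonic_credit(paper["citations"], paper["n_authors"], pos), 1))
```

Note how the fourth author's share ranges from 120 (full counting) down to well under 30 (harmonic counting) - which scheme you pick can reorder a whole ranking, and none of them knows who actually did the work.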

"What is the best way to measure a researcher's scientific impact?"


Best way? There is only one way: time. What science of the future will be constructed on it? There is no other metric for scientific impact.

This sounds like another "no more geniuses or Einsteins" sort of article.

There is only one way: time.

While I agree that only time tells - that's probably not a good metric for filling positions now.
Waiting until someone is dead until you hire them tends not to do the job.


He would give the administrators less problems, eh?

Naaa, my only point was that a "brilliant" up-and-coming person oft times might become a dud, while every now and again some truly new science comes from an unexpected quarter.

"Impacts" are by definition "after" the fact. Instead of "measure" maybe they should have used the word "predict"?

He would give the administrators less problems, eh?

I know a couple of administrators who would go for that line of reasoning ;)

Naaa, my only point was that a "brilliant" up-and-coming person oft times might become a dud, while every now and again some truly new science comes from an unexpected quarter.

Agreed. But I'd rather hire 10 brilliant up-and-coming scientists and live with the one dud than hire 10 from an unexpected quarter on the off chance one will turn out to be brilliant.

It's not perfect - but the numbers game favors the one with the track record (quite heavily in my experience). But mostly it's decided based on how they present themselves, their work and their planned work. Impact factor just gets you the invitation to the interview - not the job.

He actually is. I think his stance that science shouldn't be rewarded with prizes is pretty cool.

But then again we live in the real world - and not all science can be done with pen and paper (like his math). And with real-world issues come real-world problems. How do you choose the head of an institute? Would Perelman be the right man for the job? Despite his genius, I'd say: no way.

Impact factors are important for the interface between scientists and the institutions where they have their jobs (be it in the industry or at universities).

AMONGST scientists (e.g. when they discuss their science at conferences) you will find that impact factors don't matter one bit (most scientists don't even KNOW their own impact factor).

I found that the most well known guy at a conference will happily discuss theories with the 'lowliest grad student' as readily as with one of his 'career peers'.

In reality, everything happens at the edge of the herd, and this website and anything on it would be aware of none of that.

You'd be surprised.
Some of us here are actually directly in touch with people on the very edge of our respective specialty.

Science isn't so mysterious. People in science are also just people. You can go and talk to them like you can go and talk to most anyone if you get up the nerve to actually do it.

Heck, I even got PMs from authors of papers I commented on, here, twice, asking for review in one case (which I couldn't because the paper was over my head) and discussing my comment on another occasion because it actually seemed relevant as a qualifier for the statement the paper seemed to make. On other occasions I asked the authors directly for the paper or discussed an idea based on their work with them.

And I'm sure others have had similar experiences here.

This isn't the edge, I agree, but the way to the very edge is just an email away.

What is scientific impact supposed to mean? For example, the cold fusion finding is quite fundamental from a human-society perspective, but it never appeared in high-impact mainstream journals. Most of mainstream physics still denies it. The contemporary impact system is valued by how it contributes to the subsequent careers of scientists, not by its contribution to the rest of human civilization. Such criteria are not just harmful; they act as a brake on further progress. For example, scientists avoid researching new areas because they have no one to cite there (no citations, no grants, no salary). Aren't we paying scientists for original research in the first place?

What is scientific impact supposed to mean? For example, the cold fusion finding is quite fundamental from a human-society perspective, but it never appeared in high-impact mainstream journals. Most of mainstream physics still denies it. The contemporary impact system is valued by how it contributes to the subsequent careers of scientists, not by its contribution to the rest of human civilization.


Damn my eyes Zephyr, I agree with ya - the cold fusion guys haven't made an impact... it seems mainstream physics is the culprit; no one is writing papers on it...

We must pass a law: "Every 2nd Paper on Nuclear Physics Must Be on Cold Fusion for the Next Ninety Years"... or, for short, we could just call it the "Mainstream Fairness to Bunk Science Act".

I question the need for such a metric.