21 Altmetrics: A Better Way to Evaluate Research(ers)? – with Steffen Lemke

Who gets positions and funding in academia should depend on the merit of the researcher, project, or institute. But how do we assess these merits fairly, meaningfully, and in a way that makes them comparable?

I talked about metrics with Steffen Lemke, a PhD student at the Leibniz Information Centre for Economics (ZBW) in Kiel, Germany. He is part of the *metrics project, which investigates new research metrics and their applicability. The project is funded by the German Research Foundation (DFG).

Citation Based Metrics

In episode 9 I talked with Björn Brembs about the most prevalent metric used: the Journal Impact Factor. It turns out that the “JIF” is not a good metric.
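As a quick refresher (my own illustration, not something from the episode): the JIF of a journal for a given year is essentially the number of citations received that year by articles the journal published in the two preceding years, divided by the number of citable items it published in those two years.

```python
# Sketch of the standard JIF calculation; the numbers are invented.
def journal_impact_factor(citations_to_last_two_years, citable_items_last_two_years):
    """Citations received this year to articles from the two previous years,
    divided by the number of citable items published in those two years."""
    return citations_to_last_two_years / citable_items_last_two_years

# Hypothetical journal: 600 citations to 200 citable items gives a JIF of 3.0.
print(journal_impact_factor(600, 200))  # 3.0
```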

Another commonly used metric is the “H-index”. Like the JIF, it is based on citations: the number of times a scientific paper is cited in another scientific paper. But it aims to measure the output of a researcher rather than a journal.
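To make the difference concrete, here is a minimal sketch (again my own illustration) of how the H-index is computed from a researcher’s per-paper citation counts: a researcher has an H-index of h if h of their papers have each been cited at least h times.

```python
def h_index(citations_per_paper):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations_per_paper, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with five papers: three of them have at least 3 citations.
print(h_index([10, 8, 5, 2, 1]))  # 3
```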

Both the H-index and the JIF have their own specific disadvantages. But they also share problems stemming from the data source they use: citation indices. Citations are slow to accrue, which means it takes time to gather enough data for a proper evaluation. The indices are also incomplete and mostly locked behind paywalls. And finally, they focus solely on journal articles.

But peer-reviewed research articles aren’t the only output scientists generate. The social sciences, especially, often publish in other formats, such as books and monographs. STEM researchers, too, often create other outputs, such as designs for experimental setups or code.

Finally, citation-based metrics capture only communication among scientists, not communication with the public.

Altmetrics

New, alternative metrics aim to change all that. “Altmetrics” is an umbrella term for a range of still experimental metrics that use data found openly on the internet, which makes them fast and diverse. They look, for example, at how research articles are disseminated on social media, but also at download numbers from open repositories for code, lecture videos, presentation slides, and other resources. In this way they can cover any research product that can be found on the internet.

Whether a metric predicts scientific impact (citations) quickly and accurately can be tested. So far, it appears that data from online reference managers predict citations well: you don’t need to wait for citing authors to write and publish their own papers, you just check whether they have bookmarked your paper for later use.
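Such a test could look roughly like this (a sketch with made-up numbers, not the *metrics project’s actual method): collect early bookmark counts and later citation counts for a set of papers, and compute a rank correlation between the two.

```python
# Rough sketch: do early bookmark counts predict later citation counts?
# All numbers below are invented for illustration.
from scipy.stats import spearmanr

bookmarks_after_1_year = [12, 3, 45, 7, 28, 0, 19]   # reference-manager bookmarks
citations_after_3_years = [15, 2, 60, 9, 31, 1, 22]  # citations accrued later

rho, p_value = spearmanr(bookmarks_after_1_year, citations_after_3_years)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A strong positive correlation would support using bookmarks as an early proxy.
```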

An obvious disadvantage of altmetrics is that they can be gamed. One can pay social media providers to advertise posts, use bots to amplify the apparent impact on social media, or download files thousands of times.

Soberingly, researchers found that altmetrics do not cover the humanities and social sciences sufficiently: less than 12% of the research output from these fields showed up in the altmetrics tested.

Social Media Use

Steffen Lemke and his co-authors asked why the social sciences are so poorly represented on social media. Surprisingly, while social scientists usually justify their work by its relevance to the public, they see interacting with the public on social media as a waste of time. Some survey respondents said they felt overwhelmed by the amount of information and found it hard to judge the quality of information on the internet. Others said they would not be seen as serious if their supervisors caught them using social media, even for work.

Metric-Wiseness

In Steffen’s article you will find the interesting term “metric-wiseness”. Coined by a different research group, it describes researchers’ knowledge about metrics and their ability to understand their meaning and applicability. In its surveys, the *metrics project asks researchers about their knowledge of metrics.

Even very junior researchers know about the JIF, and they try to optimize their research output to achieve publications in journals with a high JIF. However, there is little knowledge of how it is calculated, and no awareness of the massive caveats this metric has. The same goes for the H-index. Altmetrics, however, appear to be almost completely unknown; the whole concept seems to be alien to researchers.

The careers of researchers depend on metrics, and paying attention to measurements is their bread and butter. Still, after more than a decade as a researcher myself, these findings do not surprise me.

Conclusion

Altmetrics may be the path to better research evaluation in the future. They are fast, and they cover a larger portion of researchers’ overall output beyond scientific articles. But like all metrics, they can be gamed.

Once a metric becomes representative of productivity and impact, people will optimize their behavior for that metric. At the moment, using bookmark data from reference managers to predict the impact of articles within the scientific community appears to be a robust approach. But once this becomes an established metric (and this is my own opinion), Elsevier, which owns the very popular reference manager “Mendeley”, will begin selling visibility for papers on that platform, and authors or their employers will buy it.

Overall, altmetrics are not ready to be universally applied. Many fields are insufficiently represented in the databases altmetrics rely on.

In the end, however, I think the most important thing is to inform researchers about the metrics they rely on.

Do you have questions, comments, or suggestions? Email info@scienceforprogress.eu, write to us on Facebook or Twitter, or leave us a video message on Skype (dennis.eckmeier).

Become a Patron!

Podchaser - Science for Societal Progress

Sources:
*metrics website
Steffen Lemke’s profile at ZBW
Lemke et al., “When You Use Social Media You Are Not Working”: Barriers for the Use of Metrics in Social Sciences
Rousseau, S., and Rousseau, R., Being metric-wise: heterogeneity in bibliometric knowledge
Episode 9: The Journal Impact Factor: how (not) to evaluate researchers – with Björn Brembs
