Seminar Series on Economics of Science and Innovation - Material/Q&A -
Seminar 1: Organizational Design of Academic Laboratories and Conflict of Production of Science vs. Scientists
Presentation material
Seminar 2: Academic Entrepreneurship and Evolution of Academic Cooperation
Presentation material
Q&A
Effect of entrepreneurship
Q: Were you able to consider the researchers' overall perception of academic entrepreneurship? Perhaps, although they were not active entrepreneurs at the time, they had such plans?
A: To some extent. We have a questionnaire item asking about their business planning.
Q: Does the type of entrepreneurial activity a researcher participates in affect the pattern of sharing? (E.g., is the denial rate different for those founding start-ups versus those marketing new technology?)
A: We found that all types of entrepreneurial activities except one (at the individual level) have insignificant effects on resource sharing. The only type that shows a significant effect is when a researcher is engaged in a commercial activity and, moreover, the commercialized resource itself is requested for sharing.
Effect of patenting
Q1: I think commercial academics should share research information after they obtain a patent, so as to manage both open science and academic entrepreneurship. Is the reason commercial academics are unwilling to share research information that the patent system doesn't work well?
Q2: Our current society is changing faster than ever, and surprisingly, a business model doesn't have to be incompatible with "altruistic" purposes: for instance, I'm thinking of Tesla, which opened up its patents.
A: When we talk about resource sharing within academia, patenting does not have to be incompatible with sharing. I suppose that most patentees would not try to charge users who employ their patents for academic research. Nevertheless, there are exceptions: some patentees try to profit from academic use (see: http://www.wipo.int/wipo_magazine/en/2006/03/article_0006.html). Patented technologies may also be valuable in themselves, or sharing them may be forbidden under certain contracts. For example, Walsh et al. (2007) found that drug candidate compounds are less likely to be shared. Our study cannot distinguish the nature of patented resources, but we observe that patented technologies are indeed less likely to be shared.
Q: Could something like case-dependent licensing be a solution to this problem? That is, researchers could get a free license for using the research of other groups, and if they use the results to set up a business, they would have to pay a certain predetermined fee.
A: There is a type of licensing called "reach-through," where patent users have to pay only when patent use results in commercial profit. See http://www.nature.com/bioent/2003/030101/full/nbt0902-945.html. I don't think this is a popular solution in academia, because patentees can claim overly broad rights.
Exchange mechanism
Q: How about building the 'open science' community on a model that is beneficial for scientists and easily accessible to the public, e.g., Spotify? (By analogy with how Spotify allows access to 'open music' at zero or very reduced cost.)
A: Science communities have set up repositories (e.g., the Jackson Laboratory). Donor scientists can store their resources there for sharing, recipients make sharing requests to the repositories, and the repositories then provide the resources at cost. Recipients are usually required to acknowledge the donors in their publications. We found that this mechanism was not the major route for sharing in Japan, even though Japan has many repositories.
Country specificity
Q: Based on a general understanding of differences in academic worlds all around the world, how do you expect Japan to differ from other countries and academic worlds?
A: There can be many possible reasons why Japan may differ from other countries. One factor I think may play a more significant role in Japan is the social network between academics. We found that researchers were very likely to comply with requests made by someone they knew (in other words, requests from strangers were likely to be denied). I think that cooperation based on an existing network is universal, but I suspect that not cooperating with strangers might be more prominent in Japanese (or perhaps Asian) culture compared to Western culture.
Punishment for non-cooperation
Q: Are there countries in which denying requested research materials is illegal (or even just universities in which it's forbidden)? If yes, has anyone studied their research productivity?
A: I don't think so. But a recommendation from the National Academies of Sciences, for example, says that if a request is denied, one can report the incident to the journal where the requested resource appeared, to the denier's university, or to the denier's funder.
Regulation for entrepreneurial activities
Q: Is there no regulation regarding students or researchers who engage in entrepreneurial activity? For example, limits on the working hours devoted to entrepreneurial activity.
A: Yes, there is. See this report: http://www.nistep.go.jp/achiev/ftx/jpn/dis066j/idx066j.html. Whether academics know that the regulations exist, or what they contain, is a different issue.
Sampling
Q: How did you choose the survey respondents? What percentage of the life science community in Japan would you say this study has captured?
A: The sample was randomly chosen from active researchers who received national funding and are affiliated with research-oriented universities. The sampling frame covered all fundees in most research-intensive universities, but we did not include teaching staff or education-oriented universities.
Q: It is assumed that the norms for practical contribution and open science differ across scientific fields, so why did you choose the fields of life science and material science in this research?
A: We chose "resource sharing" as a measure of open science because it can be clearly defined. We then chose life sciences and material sciences as our fields because resource sharing is known to be active in them.
Math model (in Shibayama (2015) JEE)
Q: In formula (2a), I think "b g1 x1" is right. Why is g1 not necessary?
A: The "b x1" term is the benefit when an ALLC receives cooperation from another ALLC. The assumption is that ALLC donors always help others regardless of the recipient's reputation. Therefore, g1 is not needed.
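The reasoning can be spelled out in expected-payoff form. The notation below (x1 as the frequency of ALLC players, g1 as the probability of being in good standing, b as the benefit of receiving help) is inferred from the question and answer above, not verified against Shibayama (2015):

```latex
% Sketch under assumed notation: x_1 = frequency of ALLC players,
% g_1 = probability of good standing, b = benefit of receiving help.
% A donor who conditions on reputation helps only recipients in good
% standing, so the expected benefit from such donors carries g_1:
%   b \, g_1 \, x_1 .
% An ALLC donor helps with probability 1, independent of reputation, so
\[
  b \cdot 1 \cdot x_1 \;=\; b\,x_1 ,
\]
% i.e., g_1 drops out because the ALLC donor's helping probability does
% not depend on the recipient's standing.
```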
Seminar 3: Scientific Integrity: Recent Empirical Findings on Scientists' Questionable Practices
Presentation material
Q&A
Peer review
Q: It is suggested that "Academic associations could help by officially condemning the practice." If that solution could eliminate the coercive motive, why haven't academic associations already done it? (The sentence implies that no one has done it so far, doesn't it?)
A: The coercive citations are coerced by editors of rather important journals. That is, scientists who are powerful in academic communities are the very people who coerce citations. Thus, it would be difficult to solve the problem within the community.
Q: Do you think the score given by a peer-reviewer to a paper is influenced by his work being cited by that paper? That is, a reviewer might give a better score to a paper (undeserving) if his work is cited by that paper. If so, how do we measure this phenomenon?
A: I think that citations to referees have a positive effect (or at least no negative effect) on the peer-review result, provided the work is cited in a friendly way. You can evaluate such an effect by comparing two or more referees of a single paper, one cited and the other not.
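The within-paper comparison suggested above can be sketched as follows. The review records and scores are entirely hypothetical, invented only to illustrate the design, not data from any study:

```python
from collections import defaultdict

# Hypothetical review records: (paper_id, referee_was_cited, score).
# All values are made up for illustration.
reviews = [
    (1, True, 4), (1, False, 3),
    (2, True, 5), (2, False, 4),
    (3, True, 3), (3, False, 3),
]

# Group scores by paper so each paper contributes one cited/uncited pair.
by_paper = defaultdict(dict)
for paper_id, cited, score in reviews:
    by_paper[paper_id][cited] = score

# Within-paper difference: cited referee's score minus uncited referee's.
# Pairing referees of the same paper controls for the paper's own quality.
diffs = [scores[True] - scores[False] for scores in by_paper.values()]
mean_diff = sum(diffs) / len(diffs)
print(mean_diff)  # a positive mean suggests cited referees score more favorably
```

The point of the design is that each paper serves as its own control, so differences in paper quality cancel out and only the cited/uncited contrast remains.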
Q: In peer review, how many reviewers are needed? What if one reviewer gives a substantially different evaluation and comments?
A: We usually have more than one reviewer, and it is often the case that their comments contradict each other. Authors have to choose whom to follow, and it is then the editors who decide whether the authors' decision is right or wrong.
Q: What kind of effort can researchers make to convince such stubborn referees?
A: Theoretically, it is not that authors have to convince the referees, but that authors have to convince the editor that a referee is wrong. If the editor is a reasonable person and the referee is not, the authors' chance of winning the debate is not slim.
Q: Open peer review vs. anonymous peer review in the context of the seminar?
A: There is a survey evaluating open peer review in contrast to traditional peer review. The results suggested that "open peer review was not widely popular, either among authors or by scientists invited to comment" and that "there is a marked reluctance among researchers to offer open comments" (http://www.nature.com/nature/peerreview/debate/).
Preventive measures
Q: If the criteria for certifying the position or title of a scientist depended on quality or influence instead of quantity, would it help? Junior researchers as well as associate professors face pressure over the number of papers they have to publish.
A: I think it would decrease certain types of misbehavior, such as plagiarism and duplicate publication. In this sense, I think it is a good idea. But it won't be effective against other types of misbehavior, such as fabrication or falsification. Authors would have an even stronger incentive to cook up an "interesting result."
Q: There is a considerable number of misconduct cases. How can we reduce them, if possible, beyond more replication studies?
A: See p. 52.
Q: What risk trade-offs does a scientist face when adopting one of these bad behaviors? Are there any studies about the consequences for those who get caught?
A: Not that I know of. I suppose that they are usually unable to continue working in academia if misconduct is officially concluded.
Q: Does Japan have an equivalent to the UK Research Integrity Office?
A: No, it does not, though the Science Council of Japan has advised establishing one. Funding agencies may be becoming more responsible for overseeing misconduct.
Commercialization effect
Q: Do you think this study could have any relationship to your other study on the commercialization of certain fields? Perhaps researchers who already have a good income from being entrepreneurs or from private companies are under less pressure to publish?
Q: Do you think there is any correlation between academics/authors who are entrepreneurially active (discussed in seminar 2) and research dishonesty (those who are less likely to follow referees)? I.e., do you think that authors who are more entrepreneurially active are more or less (or cannot say) likely to follow referees?
A: Theoretically, all these problems partially stem from deviation from the traditional norm. Commercialization could have the kind of effect you mentioned, but it could also be negatively correlated with the traditional norm. Thus, the total effect is an empirical question. When I tested whether involvement in commercial activity has an effect, I did not see a significant one.
Cultural difference
Q: Do you expect different results in different countries? I can imagine that there is a higher tendency to dishonest conformity in Japan since people usually don’t want to stand out in Japan.
A: Yes, I think country and culture matter. The observed effect of foreign stays suggests that authors follow referees less often in the US (given that foreign experience for Japanese academics mostly occurs in the US). Seniority may not be an issue, since authors do not know who the referee is. A pro-conformity culture (or a tendency to avoid conflict) may be peculiar to Japan, which could explain the result. Speaking generally about misconduct, incidents differ largely by country; when the standards of science are under development and the system for maintaining integrity is weak, misconduct is more likely to occur.
Seniority effect
Q: Could the seniority factor result also be interpreted as a result of a generation shift and a new culture among younger professors? I.e. could it be that associate professors will not be more honest when/if they become full professors since their behavior is incorporated in the new culture, which might be hard to change?
A: There are a few plausible interpretations of the result that associate professors are more likely to follow referees. It may be that associate professors are less experienced, so they cannot skillfully convince the referee or editor. If this is the major reason, they will become less likely to follow referees as their experience accumulates. A different story is that they have turned strategic under competitive pressure. In that case, they may not recover their sense of integrity merely because they are released from competition; indeed, we observe "repeat offenders" who are already in secure positions.
Honest error
Q: When counting wrongdoing in terms of data, what can be said about honest mistakes, genuine data misinterpretation, or wrong conclusions? Aren't they equally problematic?
A: No, they are not. Of course, scientists should be careful enough not to make errors, but even so, humans commit errors. One argument about retraction is that we should not consider all retractions as evil. Now that we know most retractions are indeed due to misconduct, authors who made honest errors may be reluctant to confess them, which is bad for the community.
Time trend
Q: In what fields is the number of misconduct cases expected to increase, as in medical research in the second paper? And why?
A: See p. 31. I guess the difference is attributable to the feasibility of checking reproducibility, the extent of competition, etc.
Various types of questionable practices
Q: How much does the percentage of subjects who engaged in one specific bad behaviour correlate with the subjects of other behaviours? I.e., is it one and the same group who commit all the 'crimes', or are there many people who commit only one or two types of bad behaviour?
A: I haven't seen any report on such correlations. My guess is that there might be some correlation if the fundamental mechanism is deviation from the honesty norm. In particular, similar QPs may be repeated by the same person. As to misconduct, those who plagiarize may be different from those who fabricate, because the techniques needed are different.
Survey method
Q: The author states that there exists a large discrepancy between what researchers are willing to do and what they admit to doing in the survey. Does this finding not undermine the accuracy of the meta-analysis and dismiss the results as insignificant?
A: In general, survey responses are subjective and therefore include errors. But including error does not destroy everything. If the error is completely random, it is usually not a problem unless its magnitude is too great. If the error is systematic, you need to be careful. In the focal study, I think the error occurs systematically, in that respondents tend to under-report incidents of misconduct or QPs. Thus, the findings are likely to be conservative: the incidence of fraud is probably estimated to be smaller than in reality.
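The effect of systematic under-reporting can be illustrated with a small simulation. The 10% true prevalence and 60% admission probability below are purely illustrative assumptions, not figures from the study:

```python
import random

random.seed(0)

true_rate = 0.10   # assumed true prevalence of a questionable practice
p_admit = 0.60     # assumed probability that an offender admits it in a survey
n = 200_000        # simulated respondents

# Systematic error: offenders admit only with probability p_admit,
# while non-offenders answer truthfully (never falsely admit).
admitted = 0
for _ in range(n):
    engaged = random.random() < true_rate
    if engaged and random.random() < p_admit:
        admitted += 1

estimate = admitted / n
# The survey estimate converges to true_rate * p_admit (0.06 here),
# systematically below the 10% truth: such a survey yields a
# conservative lower bound on the real incidence.
```

Because every error pushes in the same direction (offenders denying), no sample size fixes the bias; this is why systematic error needs care while zero-mean random error mostly just widens confidence intervals.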
Q: Can you give us any insight on how the initial interview of the 21 professors helped you formulate your survey?
A: Generally, we start with interviews and a small-scale survey before conducting a large survey. This is 1) to understand the detailed mechanisms behind observed problems and to explore interesting research questions, and 2) to confirm that the survey works well. In the focal study, I had not decided to investigate the peer-review process in detail before the interviews. I conducted interviews with the broad goal of investigating scientists' strategic behavior in publication. Thus, the interviews helped us find a specific setting to examine and understand the detailed questions to ask.
Q: Since the research is conducted from the authors’ perspective, there might be some subjective factors in their answers. Is there any study on peer review from the referees’ point of view?
A: As to conflict with authors, I haven't read a paper focusing on the referee's perspective. My understanding is that this paper is the first to empirically examine peer-review conflict. However, there are studies examining how peer review influences the quality of the resulting publication from the referee's perspective. They typically count the length of referee reports and check whether it correlates with the resulting publication's quality.
Presentation material
Q&A
Peer review
Q: It is suggested that “Academic associations could help by officially condemning the practice.” If that solution could eliminate the coercive motive, why didn’t some academic associations already do it? (the sentence implies no one did it so far, isn’t it ?)
A: The coercive citations are coerced by editors of rather important journals. That is, scientists who are powerful in academic communities are the very people who coerce citations. Thus, it would be difficult to solve the problem within the community.
Q: Do you think the score given by a peer-reviewer to a paper is influenced by his work being cited by that paper? That is, a reviewer might give a better score to a paper (undeserving) if his work is cited by that paper. If so, how do we measure this phenomenon?
A: I think that citation given to referees have a positive effect (at least does not have a negative effect) on peer review result, if the way it is cited is friendly. You can evaluate such an effect by comparing two or more referees of a single paper, one cited and the other not cited.
Q: In peer review, how many reviewer is needed? What if one reviewer give substantially different evaluation and comment?
A: We usually have more than one reviewers, and it is often the case that their comments are contradictory. Authors have to choose who to follow, and then, it is editors who decide if authors' decision is right or wrong.
Q: What kind of effort can researchers make to convince such stubborn referees?
A: Theoretically, it is not that authors have to convince referees, but that authors have to convince the editor that referee is wrong. If an editor is a reasonable person and referee is not, the chance of authors winning the debate is not slim.
Q: Open peer review vs. anonymous peer review in the context of the seminar ?
A: There is a survey evaluating open PR in contrast to traditional PR. The result suggested "open peer review was not widely popular, either among authors or by scientists invited to comment" and “there is a marked reluctance among researchers to offer open comments" (http://www.nature.com/nature/peerreview/debate/).
Preventive measures
Q: If the criteria of certifying the position or title of a scientist is depending on quality or influence instead of on quantity, will it help? since junior researchers as well as associated professors are facing the pressure of the numbers of paper that they have to publish.
A: I think it will decrease certain types of misbehavior such as plagiarism and duplicate pub. In this sense, I think it is a good idea. But, it won't be effective for other types of misbehavior such as fabrication or falsification. Authors would have an even stronger incentive to cook up "interesting result."
Q: There's considerable number of misconducts. How can we reduce it, if possible, outside of more replication study?
A: See p.52.
Q: What are the risk trade-offs a scientist is facing while adopting one of these bad behaviors? Are there any studies about the consequences for who gets caught?
A: Not that I know of. I suppose that they are usually unable to continue working in academia, if misconduct is officially concluded.
Q: Does Japan, have their equivalent to UK Research Integrity Office?
A: No, it does not, though Science Council of Japan advised to have one. Maybe, funding agencies are becoming more responsible for overseeing misconduct.
Commercialization effect
Q: Do you think this study could have any relationship to your other study on commercialization of certain fields? Perhaps researchers who already have good income from being entrepreneurs or from private companies are less pressured to be published?
Q: Do you think there is any correlation between the academics/authors that are entrepreneurially active (discussed at seminar 2) and research dishonesty (those who are less likely to follow referees)? I.e. do you think that authors that are more entrepreneurially active are more or less (or cannot say) likely to follow referees?
A: Theoretically, all the problems partially stem from the deviation from the traditional norm. Commercialization could have the kind of effect you mentioned, but it could be negatively correlated with traditional norm. Thus, the total effect is a matter of empirical question. When I tested if the involvement in commercial activity has a effect, I didn’t see a significant effect.
Cultural difference
Q: Do you expect different results in different countries? I can imagine that there is a higher tendency to dishonest conformity in Japan since people usually don’t want to stand out in Japan.
Q: Do you expect different results in different countries? I can imagine that there is a higher tendency to dishonest conformity in Japan since people usually don’t want to stand out in Japan.
A: Yes. I think country and culture matters. The observed effect of foreign stay suggests that authors less often follow referees in the US (given that foreign experience for Japanese academics mostly occurs in the US). Seniority may not be an issue since authors do not know who the referee is. Pro-conformity culture (or, tendency to avoid conflict) may be peculiar in Japan, which could explain the result. Generally speaking about misconduct, incidents are different largely by country; when the standard of science is under development and system to maintain integrity is weak, misconduct is more likely to occur.
Seniority effect
Q: Could the seniority factor result also be interpreted as a result of a generation shift and a new culture among younger professors? I.e. could it be that associate professors will not be more honest when/if they become full professors since their behavior is incorporated in the new culture, which might be hard to change?
A: There are a few plausible interpretations behind the result of associate professors being more likely to follow referees. It may be that associate professors are less experienced so they cannot skillfully convince the referee/editor. If this is the major reason, they will become less likely to follow referees as their experience accumulates. A different story is that they turned strategic under the competitive pressure. Then, they may not recover their sense of integrity only because they are released from competition. We observe "repeat offenders" who are already in a secure position.
Honest error
Q: When calculating wrong-doings in terms of data , what can be said about mistakes ‘ genuine data misinterpretation or wrong conclusions, Isn't that equally problematic ?
A: No, they are not. Of course, scientists should be careful enough not to make errors, but even so, humans commit errors. An argument about retraction is that we should not consider all retractions as evil. Now that we know that most retractions are indeed evil, then authors who made errors would be reluctant to confess it, which is bad for the community.
Time Trend
Q: In what fields it is expected that number of misconducts committed will be increase like medical researches in the 2nd paper? And why?
A: See p31. I guess the difference is attributed to the feasibility of checking reproducibility, extent of competition, etc.
Various type of Questionable Practices
Q: How much does the percentage of test subject who conducted a specific bad behaviour correlate with the tet subjects of other behaviours? I.e. is it one and the same group who conduct all the 'crimes', or ar there many people also conducting only one or two types of bad behaviour?
A: I haven't seen any report referring to any correlation. My guess is that there might be some correlation if the fundamental mechanism is the deviation from the honesty norm. In particular, similar QPs may be repeated by the same person. As to misconduct, those who do plagiarism may be different from those who fabricate, because needed techniques are different.
Survey method
Q: The author states that there exist a large discrepancy between what researchers are willing to do and what they admit in doing in the survey. Does this finding not underline the inaccuracy of the meta-analysis and dismiss the results as insignificant?
A: In general, the survey response is subjective therefore including errors. But, including error does not destroy everything. If error is completely random, it is usually not a problem unless the magnitude of error is too great. If error is systematic, you need to be careful. In the focal study, I think the error occurs systematically in that respondents tend to under-report the incidents on misconduct or QP. Thus, their findings are likely to be conservative; the incidents of fraud are probably estimated to be smaller than the reality.
Q: Can you give us any insight on how the initial interview of the 21 professors helped you formulate your survey?
A: Generally, we start with interviews and small scale survey before doing a large survey. This is 1) to understand detailed mechanism behind observed problems and to explore interesting research questions, and 2) to confirm the survey works fine. In the focal study, I had not decided to investigate peer review process in detail before the interview. I did interviews with a broad goal of investigating scientists' strategic behavior in publication. Thus, the interview helped us find a specific scene to be examined and understand detailed questions to be asked.
Q: Since the research is conducted from the authors’ perspective, there might be some subjective factors in their answers. Is there any study on peer review from the referees’ point of view?
A: As to the conflict with authors, I haven't read a paper focusing on the referee's perspective. My understanding is that this paper is the first to empirically examine peer-review conflict. However, there are studies examining how peer review influences the quality of the resulting publication from the referee's perspective. They typically count the length of referee reports and see whether it correlates with the resulting publication's quality.
Seminar 4: Evaluation of Science, Resource Allocation, and Scientific Performance
Presentation material
Q&A
Evaluation standard: quality vs. quantity
Q: Wouldn’t it be better for policy makers to focus on the total number of citations of a certain group, rather than just on the pure number of publications and the impact of the journal, when allocating resources?
A: I agree, but there are two concerns. 1) It can also induce undesirable behavior by scientists (including misconduct). 2) There is no perfect way to measure "impact". The Impact Factor is used very frequently, but that does not mean scientists are happy with it.
Q: While a publication-count-based funding allocation system reduces publication quality, wouldn't a citation-based method cause coercive citation? And what could be an alternative strategy for reasonable funding allocation?
A: You are right. Quality-based evaluation can induce various problematic behaviors; coercive citation is just one example. We can employ peer review to judge the "quality" of research without relying on metrics. This is costly, but we can do it for a limited number of relatively big grants. Another problem is that peer review can be compromised by social ties between reviewers and applicants, especially in small communities such as Japan's.
Evaluation: new trend
Q: Have there ever been discussions about, or implementations of, different methods of sharing academic research knowledge with the world in an ethical and scientifically justified way?
A: See p.54.
Evaluation: Interdisciplinarity
Q: Apparently, ranking of a publication must always be done in relation to its context, defined by peers and scientific fields. However, peers and fields raise artificial boundaries and translation barriers (by employing a very specific language, for example) to create domains of discursive hegemony and evade competition. From this perspective, the differentiation of scientific disciplines can be viewed as the creation of various scientific cartels/rent-seeking coalitions that have radically reduced the size of the science market. The impact of a publication should be assessed in relation to this differentiation/fragmentation, asking how far its impact has stretched across the fiefdoms of self-contented peer groups and scientific fields. I wonder if there are ranking methods that employ sophisticated network analyses in a first step to identify these entrenched fiefdoms in science, and in a second step project the citation network of a paper onto these analyses, allowing one to assess whether it is merely reproducing entrenched clusters, or transcending and transforming them.
A: For a technical reference, see, for example, Foster, J. G., Rzhetsky, A., & Evans, J. A. 2015. Tradition and innovation in scientists’ research strategies. American Sociological Review, 80: 875-908. In practice, competition between fields occurs at a much slower pace than competition at the individual level. For example, KAKEN/GiA field definitions are reviewed continually, which could potentially eliminate weak fields. Evaluation of interdisciplinarity is another issue. Some funding schemes emphasize interdisciplinarity; for example, my impression is that European Research Council grants do.
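The two-step method the questioner describes can be sketched in miniature. This is a hypothetical toy example, not the method of the cited paper: the cluster assignment is hard-coded here, standing in for a prior community-detection step (e.g., modularity maximization) on the full citation network; the second step then measures how far a paper's references cross cluster boundaries.

```python
# Hypothetical citation data: which earlier papers each focal paper cites.
cites = {
    "P1": ["A1", "A2", "A3"],        # stays inside field A
    "P2": ["A1", "B1", "B2", "C1"],  # reaches into fields B and C
}

# Cluster ("fiefdom") assignment, assumed to come from a prior
# community-detection step on the citation network.
cluster = {"A1": "A", "A2": "A", "A3": "A",
           "B1": "B", "B2": "B", "C1": "C"}

def spanning_share(paper, own_cluster):
    """Fraction of a paper's references that leave its own cluster."""
    refs = cites[paper]
    return sum(cluster[r] != own_cluster for r in refs) / len(refs)

print(spanning_share("P1", "A"))  # 0.0  -> reproduces its own cluster
print(spanning_share("P2", "A"))  # 0.75 -> transcends cluster boundaries
```

A real implementation would also weight by how distant the crossed clusters are, which is roughly the "tradition vs. innovation" axis in Foster et al.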
Resource allocation equality
Q: Is there a concern that a more equal distribution of budget would reduce competition and result in compromised scientific impact?
A: If the budget were distributed completely equally, productivity would deteriorate. I think the distribution needs to be skewed somewhat according to the ability/performance of individuals/universities. My impression is that the current distribution is too skewed, or that the way budget is distributed is not really based on performance. That is why we observe some counter-productive publication behavior among those who received large grants.
Resource allocation and fixed cost
Q: Quote from paper - “Large facilities entail operating cost in the long term and it may be difficult for labs to downsize" - Can sharing of resources by labs be a solution to this? Similar labs may require similar capital-intensive equipment, etc. Do you think sharing of resources (equipment, lab trainers, etc.) among similar labs will cut down some of the operating cost and help labs maintain their level of scientific productivity even when funding comes down a bit?
A: I agree. Reducing fixed costs, or spreading them at the university/national level, makes funding allocation more dynamic. Indeed, one of the government's initiatives follows this direction.
Resource allocation: field difference
Q: What do you think about the difference in funding distribution between the natural and social sciences? Is it because of the different resource requirements of different subjects? For example, the natural sciences need more facilities to carry out experiments.
A: You are right. Some fields technically need large budgets, which accounts for the observed difference in funding distribution. However, even if large facility costs were removed, I would still expect some difference because of differences in initial start-up costs. That is, in some fields (like the natural sciences), producing the first result may take a few years, while in others (like the social sciences), the initial result may come out within months. In the former case, funding allocation becomes less dynamic, which could increase the inequality of funding allocation.
Q: Should we encourage PIs of overly large laboratories to downsize in some way, such as by changing metrics? Do you think large laboratories still play an important role in the community?
A: We found that labs with too many members compromise publication quality and education quality. Thus, overly large labs should be avoided absent a reasonable justification. Some types of research require a large team (like genome projects, etc.), where large labs may make sense. Such special projects may be better managed separately from ordinary university labs, since they require a different set of managerial skills.
Open Access Journals
Q: Obviously, the author is purposely provocative and wants to condemn fee-based open-access journals. To put it differently, he is trying to “scam” them, and it worked pretty well.
However, can’t we assume that the huge majority of scientists wouldn’t try to fake their results, have a certain professional self-respect, and hence are diligently working for the sake of science rather than meaninglessly? Dishonest people will always exist in a very small proportion anyway, but it doesn’t mean we should reconsider those fee-based open-access journals.
A: You are right. I think the predatory open-access business is not sustainable after all. Through word of mouth, we will notice that some journals are not reliable and stop submitting papers there, and predatory journals will eventually be eliminated. There are indeed reputable open-access journals, like PLoS One. The survey is nevertheless valuable in that it shows that quite a large proportion of OA journals are indeed unreliable.
Q: In the article, both the notion of a wild wild west and of a trust-based environment are mentioned. Have there been similar studies like the mentioned paper, and can we see a trend from trust-based towards wild wild west? Or (more likely) is the situation more complicated, but can we still distinguish tendencies toward a certain type of environment and/or behaviour?
A: As the academic sector becomes larger and more commercialized, it invites various types of predatory businesses. I assume not many of them are sustainable: scientists can tell which outlets are fraudulent if they really try to judge. However, where mutual monitoring does not work well (e.g., in countries where the standard of science is still immature), predatory businesses may persist.
Method (Shibayama 2015)
Q: Among the 900 PIs in the data sample, only 377 responded to the survey (a 44% response rate). How can we believe that results based on this sample represent the real situation? The people who didn't respond may be those who do not care about the quality of their publications.
A: Your point is absolutely right. We compared observable characteristics between respondents and non-respondents and confirmed no meaningful difference between them. In this study, the survey did not focus on publication behavior, so I wouldn't be worried about a bias in this regard. By the way, a 44% response rate is quite good for social science research.
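A respondent/non-respondent balance check of this kind can be sketched as follows. The data here are simulated for illustration (both groups drawn from the same hypothetical distribution of an observable such as publication count); the stdlib-only permutation test stands in for whatever test the study actually used.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical observable characteristic (e.g. prior publication count)
# for the 377 respondents and 523 non-respondents; values are made up.
respondents    = [random.gauss(20, 5) for _ in range(377)]
nonrespondents = [random.gauss(20, 5) for _ in range(523)]

def permutation_pvalue(a, b, n_iter=2000):
    """Two-sided permutation test on the difference in group means."""
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

p = permutation_pvalue(respondents, nonrespondents)
print(p)  # a large p-value would suggest no detectable difference
```

A large p-value on each observable is what supports the "no meaningful difference" claim; it cannot, of course, rule out differences on unobserved traits such as attitudes toward publication quality.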
Methods (Fanelli 2010)
Q: Papers published between 2000 and 2007 were sampled. When the number of papers retrieved from one discipline exceeded 150, papers were selected using a random number generator. Why the need for a quota of 150 papers per discipline before using a random method to select the sample papers? Why didn't you use random sampling from the beginning?
A: This is called stratified sampling. If random sampling were used over the whole sample, some disciplines could end up with too few papers. The study wants enough papers in each discipline to evaluate the field effect reliably. The threshold of 150 serves to limit the workload.
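The sampling rule described above can be sketched in a few lines. The corpus sizes here are invented for illustration; the point is that small disciplines are kept whole while large ones are randomly capped at the quota.

```python
import random

random.seed(7)

# Hypothetical corpus: papers grouped by discipline, with unequal sizes.
corpus = {
    "physics":   [f"phy{i}" for i in range(400)],
    "biology":   [f"bio{i}" for i in range(900)],
    "sociology": [f"soc{i}" for i in range(80)],  # small field
}

CAP = 150  # per-discipline quota, as in the sampling described above

def stratified_sample(corpus, cap):
    """Keep every paper in small strata; randomly subsample large ones."""
    sample = {}
    for field, papers in corpus.items():
        if len(papers) <= cap:
            sample[field] = list(papers)       # all 80 sociology papers
        else:
            sample[field] = random.sample(papers, cap)
    return sample

s = stratified_sample(corpus, CAP)
print({field: len(papers) for field, papers in s.items()})
# {'physics': 150, 'biology': 150, 'sociology': 80}
```

Pure random sampling over the pooled corpus would instead allocate papers in proportion to field size, leaving sociology with roughly a tenth as many papers as biology and making the field-effect estimate noisy there.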