========================================
Questions not addressed during the class
========================================

## Some of the questions below may not be exactly what you asked, because I have summarized similar questions together.

## Some questions may be skipped. This happens (1) when answering the question needed some clarification or context behind it, (2) when a similar question was asked in class, or (3) when the presentation essentially answered the question.

## If you find your question unanswered, or if you have follow-up questions, please let me know.

Q1: Ultimately, the study shows clearly that the "publish or die" incentive passed down through various innovation systems around the world has not necessarily increased the quality of research so much as the quantity of publication. Should we entirely discard publication as a measure of research intensity/impact, or at most keep it as one factor in a multivariate analysis, like a web analysis? Or, considering the relative ease with which information on publications can be gathered, could publication still be the major measure in an innovation system, given a deepened approach to publication characteristics?

A1: Most scientists would probably agree that evaluation should be based on the contents of publications (rather than on metrics such as citations or impact factor), and that is what peer review does. Obviously, peer review is costly and takes time. So, comparing the costs and benefits of peer review and metrics-based evaluation, the former may be chosen when the stakes are high and the latter otherwise.

Q2: It seems that PhD students and researchers in general are well aware of the quality of their research even before submitting papers to journals. Some avoid publishing papers in journals with a low impact factor, while others obviously don't care and publish regardless of impact factor.
Wouldn't it be beneficial for universities to have a policy stating that only papers with a certain degree of impact should be published, in order to improve their reputation?

A2: Practically, this has been done in some universities and in some countries: they explicitly state that only specific journals are considered for promotion and evaluation. This may improve reputation, but only if efforts are shifted toward higher-impact research. As stated for other questions, it also has side effects, such as biasing the choice of research subjects.

Q3: While educational institutes in Japan hold a norm of three-year graduation for doctoral students, many other countries impose stricter criteria for doctoral students to graduate (e.g., a publication count with an IF requirement). Do you think a shift in educational policy could help relieve the problem of low-quality by-product publications?

A3: I think that changing educational policy (e.g., requirements for graduation) will change publication practice, but the result may or may not be any better. Constraining the IF of journals, for example, may affect the choice of subjects or could induce misconduct. Evaluation based on stricter peer review seems ideal, but that works only with high-quality and devoted evaluators, and it could end up producing even lower-quality PhDs with no publication record.

Q4: In Figure 2, "Portfolio of publication quality by country," of the paper, there is clearly a difference in the number of articles with low IF between Japan and other developed countries. Considering the similar funding and study levels, how could that difference have arisen? If those low-IF articles were mostly published for the purpose of student training, could it be that the standard for evaluating students in Japanese universities is comparatively lower than in other developed countries?
A4: From the empirical results, I think two factors peculiar to Japan are at work: (1) linguistic/cultural distance from the Western world (e.g., publication in Japanese domestic journals) and (2) student publications. As for (2), postgraduate education in Japan has some issues. One relevant point is that the government has increased the PhD quota to raise the number of PhD holders to the standard of Western countries, which essentially lowers the entrance threshold. A big difference from, for example, the US system is that PhD students rarely drop out in Japan; i.e., there is no selection mechanism during PhD courses.

Q5: You mentioned that returnees from foreign countries tend to scrap papers that are rejected, or likely to be rejected, by high-impact journals. But wouldn't it be more beneficial to society if they published such papers in low-impact journals (even though it may hurt their reputation) rather than scrapping them completely?

A5: Yes. We argue that policies that place too much emphasis on prestige could compromise the norm of open science.

Q6: You mentioned a phenomenon in which returnees from foreign experience, particularly from the US, avoid publishing papers in low-impact journals and tend to scrap papers that are rejected or likely to be rejected by high-impact journals. Considering the case of the US, what policy differences make that happen?

A6: Relevant factors at work particularly strongly in the US may be competition and a career filter based on publication record (evaluation in the tenure-track system). Under such conditions, scientists are forced to be efficient in their publication strategy.

Q7: What could be done, in your opinion, to encourage researchers, particularly those returning from foreign experience, to publish their papers even if they are of lower quality but still make some level of scientific contribution?

A7: Externally incentivizing them will be difficult. If we rewarded researchers based on publication count, more low-impact papers might be published, but that would probably decrease the quality of publications.
What is important is that the scientific community shares a standard, or a value, of publication that appreciates papers of low impact but of some scientific relevance.

Q8: As you mentioned, returnees seem to be unwilling to publish in low-IF journals; in other words, the proportion of low-quality publications is relatively low in their case. What do you think is the main reason for this? Personally, my explanation is that while returnees are working abroad, as foreigners they need to do higher-quality research and publish more high-quality papers to prove their abilities; otherwise, they might be looked down upon by their counterparts. After coming back to Japan, they simply keep this kind of behavior. Do you think that makes sense?

A8: Yes, I agree with your story.

Q9: Could it be that high funding is not actually a cause of poor-quality publication? My intuition is that high funding is rather a consequence of high productivity under the current policy of funding the most productive academics. I think the number of PhD students has a bigger effect. Professors make a big effort to get a faculty position at a highly ranked university, but after that their agenda becomes overloaded: they need to carry out non-research missions and engage in social relations at the same time, so they have less time to work with their assigned students. I remember that when I got the embassy recommendation and started applying to laboratories, a student in Japan advised me to apply to associate professors, the reason being: "they are more likely to give better follow-up to your research work." Moreover, I have also seen students choose to study at lower-ranked universities just to have a close relationship with their supervisor, but then realize the gap between their conditions and those of students at highly ranked universities, and somewhat lose motivation. After getting a full professorship, a professor takes on the new role of creating the next generation of researchers.
He needs not only to manage his grants efficiently but also to manage his workforce efficiently.

A9: First, the direction of causality is difficult to establish. With the current econometric setting, it is not provable which direction is real; more time-series data and a controlled design are needed. You are right that, in the Japanese system, full professors are forced to engage in non-research jobs, which should lower the quality of research; in fact, the results show that administrative engagement decreases quality. I also agree with your argument about the lab setting and professors' strategies. The main argument of the paper is that some professors aim at expanding their labs, and this causes an under-supply of supervisory effort per student.

Q10: My second question regards the definition of low- versus high-quality papers. Is it not dangerous to use only the number of citations as a measurement? For example, if a famous professor publishes a paper of very low quality, the paper might still be cited a large number of times, simply to add weight to one's own paper by having his name in one's references. This would accordingly make his paper a so-called high-quality paper, but on false grounds.

A10: Yes, you are absolutely right. Citation is not a perfect measure and has many limitations, including the one you mentioned. My paper uses citations and IF on the assumption that, statistically, they can serve as some proxy of quality or impact (which can be controversial, though), but more caution is needed when we evaluate a single paper.

Q11: It goes without saying that academic scientists should publish as often as possible in journals of the highest possible profile. However, should they consider the timing of publishing? When is there "enough" data to publish a paper? Should they publish all the data they have?

A11: From a scientist's perspective, there is always a trade-off between publishing quickly with rather premature data and publishing thorough data that takes time to collect. Obviously, the latter will have greater impact, but there is a risk that someone else doing similar research publishes first. So, scientists strategically decide the timing and thoroughness of their publications. From a social perspective, quick dissemination (reporting each small piece of a result as soon as it is obtained) may be best in terms of the amount of findings shared in the community, but it increases the cost of information processing: scientists would always have to go through all the small papers and integrate the information. In principle, quick publication is the norm, but I think it is accepted (and probably considered better) to take time to accumulate enough data that meaningful interpretation is possible.