How to protect research ideas as a junior scientist

I’ve seen PIs ‘steal’ their trainees’ ideas. I have seen trainees ‘steal’ their PIs’ ideas. Both parties are hurt by these actions. To avoid this, there should be contracts in place to protect everyone.

One reason scientists get scooped is because we do not instantaneously get our ideas into the world and get credit for them. It takes time to write grant applications, get decisions on whether they have been approved, publish manuscripts and prepare talks, alongside other commitments. It’s a perfect opportunity for someone with more time, power and bandwidth to take your idea and run with it.

This article about idea scooping addresses a concern shared by many researchers, but the way it gets discussed makes me uncomfortable. It essentially proposes additional intellectual property restrictions on very early-stage ideas to prevent your colleagues from scooping your ideas for their grant proposals. Scientific ideas are traditionally considered a public good (non-rivalrous and non-excludable), and the problem here seems to be that credit for the idea (not the idea itself) is excludable in practice. The public benefit of scientific ideas is already challenged by universities patenting research results at the publication stage (making use of the idea excludable), and extending IP back to the ideation stage seems like a slippery slope towards limitations on who can do what research (i.e. the patent-troll model might get extended to idea-trolls). It would probably also add a bunch of extra bureaucracy for researchers to go through when ‘protecting’ their ideas and before submitting grants (universities may be incentivised to create an ‘idea licensing office’ next to the ‘technology transfer office’).

Until processes are put in place to protect people’s ideas, theft will continue to happen. Do not let anyone tell you that you do not deserve to get credit for your ideas and contributions. Your expertise, creativity and innovative ideas are what will make a true impact on the world.

Separately, it’s not clear to me that the researcher who develops an idea is always (or even often) the best person to actually carry out the research. The fact that ideation and execution are very tightly linked in the grant/publication cycle seems like it may lead to an inefficient distribution of labour among researchers. (Anecdotally, I know of some researchers, usually senior ones, with more good ideas than they have time to develop into projects.) Getting scooped sucks and it’s great to get credit for your ideas, but to me that argues for creating easy ways to publish early-stage ideas so they can be credited (Seeds of Science and RIO’s Research Idea article type go in this direction) rather than trying to add extra IP restrictions.

[Somebody mentioned to me that, in a traditional economic setting, protecting the idea and selling the rights to conduct the research is how the problem of inefficient labour distribution would get solved, which is an argument that could support IP protection for research ideas.]

Do others have any thoughts on this?

What an interesting conversation…

I would like to remark on some of the caveats facing junior researchers. On the one hand, junior researchers are elated at the idea of contributing to the development of human knowledge with original and inspiring discoveries, and are tempted to demonstrate their skills (almost) immediately. Basically, it’s human nature. On the other hand, they must be warned of the unpredictability of luck in their studies, so as not to subject their own self-esteem to factors that probably have nothing to do with their internal abilities. Nor should they be discouraged by adverse events or unethical behaviors.

It’s a pity that this might happen, especially since it is typically young minds that are best able to connect such distant ideas (a premise for brilliant discoveries).

Several researchers I followed closely were, in my humble opinion, adequately qualified in their fields of study. But only some of them were lucky enough to successfully publish their results in peer-reviewed journals as early-stage researchers, and this happened for a multitude of reasons (which I can’t even properly identify). Frankly, much of the luck depends on the ability to create strong and loyal bonds with other researchers. This undermines the idea that one’s mental skills are the only route to remarkable results.

My apparently orthogonal topic is actually a crossroads of ideas: unethical practices in the publishing and editorial process can be prevented by a broad and benevolent commitment from the scientific community, regardless of one’s role.

Paradoxically, such an ability to create social bonds and mutual benefits between researchers, without any competitive attitude, is the most important ingredient of a new paradigm of scientific research, and this stems from an awareness of the negative repercussions of non-cooperative behaviors. This should be evident, as we are experiencing a time of hardship in human relations at a broader level.

Thank you very much for your courtesy and appreciation. Comments and critiques are welcome.

Welcome to the community @Enrico_Gabriele!

Yes, I agree that a lot of success as an early career researcher comes down to luck, and having more experienced people who can help you move forward is a big benefit. This is also concerning, as it does not encourage people to work on ‘high-risk/high-reward’ projects early in their career; they need to pick projects they think will work out in order to have something to publish.

such an ability to create social bonds and mutual benefits between researchers, without any competitive attitude, is the most important ingredient of a new paradigm of scientific research, and this stems from an awareness of the negative repercussions of non-cooperative behaviors.

Well, I think a bit of friendly competition can be helpful, but my impression is that there is far too much competition for scarce resources in academia at the moment. Personally, I hope that independent research (which is often done by self-motivated amateurs) will allow a more cooperative environment to develop.

Hi Gavin, could I correct you here? Independent research is not done by self-motivated amateurs. I assume many of them hold a PhD and went through several rounds of training. I see the biggest challenge as being that funders currently don’t accept IGDORE, or that funding is given out only to those who will get a contract in the host country of the funding institution.

Hi Gudrun,

I think we just have a difference in terminology. By amateur, I mean in the sense of doing research while unpaid, not unskilled. I agree that many independent researchers are highly trained and previously worked as professionals (in the sense of being paid to do research). I believe it is perfectly possible, and commendable, to be a trained amateur researcher. And I intended self-motivated to contrast with the career-motivations that dominate the mindset of many professional academic researchers.

But regarding the point of funding, my terminology does go into a bit of a grey area, as I also know some independent researchers who have funding to do their research and many who are looking for it.

This brings to mind a fun paper: Amateur hour: Improving knowledge diversity in psychological and behavioral science by harnessing contributions from amateurs. It discusses how people with a high level of domain expertise can work as amateurs (either independent scientists or outsiders, in their framing), and situations where independent researchers might still be considered amateurs even when receiving financial support. Maybe @SeedsofScience would like to contribute to the discussion?

This may be relevant to this conversation. The quality and authenticity of research become blurred and questionable when mechanical procedures, such as numerical metrics, are used to measure and quantify research and academic productivity :man_shrugging:

Just thinking of the following article:

A critical issue stems from citing retracted literature, although we are also experiencing a soaring trend in alternative academic metrics (which I do not find exhaustive at all, since different national authorities may enforce different rules/criteria for journal accreditation: one country’s authority may rank a certain journal as top-tier while another country’s authority evaluates it otherwise). Apart from those journals that gain the largest possible cross-country consensus, having taken advantage of being pioneers of successful scientific branches, we must acknowledge that there are many peer-reviewed journals of similar soundness and methodological accuracy.

Briefly, the quality of a manuscript depends not on metrics per se (unless we have proven evidence of an absence of biases) but on the validity of the arguments it holds (e.g. mathematical reasoning is valid in itself, regardless of its communication channel). In my humble opinion, I sincerely believe that a highly engaged scientific community (one that squeezes the “power distance” (see Hofstede) between the top and the bottom) can efficiently promote alternative perspectives on scientific research (which might earn unexpected appreciation from academic stakeholders).

We sometimes mistakenly think that we have rivals; we simply have those who have not yet understood what independent research means (and what it can do).

Independent research goes beyond conventional wisdom, but it must have tight and efficient rules to make it socially accepted as a hegemonic trend. Some of the virtuous examples I have personally encountered throughout my lifetime have contaminated their own expertise with exogenous stimuli (coming from unexpected situations and/or social contexts).

Its functioning implies a series of requisites: [1] independent researchers must feel socially accepted by other potential/actual colleagues; [2] independent researchers must be engaged in highly motivated communities that have an internal core group around which turnover is well managed; [3] they work better in circumstances where they are more likely to change jobs, as it is mentally rejuvenating to expose one’s mind to as many different stimuli as possible; [4] their expectations about life must be supported positively and not depressed by adverse events; [5] profit-seeking attitudes must be channelled towards win-win outcomes as much as possible.

This is not a manifesto; it is just my own opinion. Again, feel free to comment from any perspective.

[1] independent researchers must feel socially accepted by other potential/actual colleagues; [2] independent researchers must be engaged in highly motivated communities that have an internal core group around which turnover is well managed; [3] they work better in circumstances where they are more likely to change jobs, as it is mentally rejuvenating to expose one’s mind to as many different stimuli as possible; [4] their expectations about life must be supported positively and not depressed by adverse events; [5] profit-seeking attitudes must be channelled towards win-win outcomes as much as possible.

This is a good list to work from @Enrico_Gabriele! Personally, I feel that [2] is generally lacking at the moment - IGDORE provides an institutional affiliation for many independent researchers (which helps with [1]) and many are individually motivated, but I don’t think we provide much community support for affiliates (Ronin might do better). It’s something I would love to improve, but we haven’t had the resources yet. Regarding [4], I think this ties into getting an income as an independent researcher, which is hard. We have previously discussed the idea of a UBI for researchers on the forum: Universal basic income: What would it mean for researchers? and A new paradigm for the scientific enterprise: nurturing the ecosystem - #9 by alex.lancaster. While it is exciting, it’s hard to make happen.

Regarding citation practice, I’m not convinced that citations to retracted papers are a major problem, although applying a correction factor to journal metrics for publishing retracted papers might make journals more cautious about publishing flawed work.
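
To make the correction-factor idea concrete, here is a minimal sketch of how such an adjustment might work. Everything in it is a hypothetical illustration: the linear formula, the penalty weight and the example figures are assumptions of mine, not an existing bibliometric.

```python
# A toy retraction-adjusted journal metric. The linear formula and
# the penalty weight are illustrative assumptions, not an
# established standard.

def adjusted_metric(impact_factor: float,
                    retracted_papers: int,
                    total_papers: int,
                    penalty_weight: float = 5.0) -> float:
    """Scale a journal metric down by its retraction rate.

    With penalty_weight = 5.0, a 1% retraction rate cuts the
    metric by 5%.
    """
    retraction_rate = retracted_papers / total_papers
    correction = max(0.0, 1.0 - penalty_weight * retraction_rate)
    return impact_factor * correction

# Example: a journal with impact factor 10.0 that retracted
# 4 of its last 1000 papers (a 0.4% retraction rate).
print(adjusted_metric(10.0, 4, 1000))  # -> 9.8
```

The max() clamp just ensures a journal with many retractions bottoms out at zero rather than going negative; a real correction would need far more care about time windows and why papers were retracted.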

I would like to focus on the matter of journal metrics as a premise of academic quality… and possibly go beyond the scope of your post…

A pivotal issue of scientific research stems from the scarce attention paid to quality per se, without any spurious contamination, because too many experts want to conflate quality with other considerations while ignoring too many aspects… Essentially, we have to choose whether to prioritize inclusiveness and diversity among researchers or to prioritize strict standards of research (net of their pros and cons). That said, what we label as “inclusive” or as “strict” differs greatly between academic fields of study: [1] STEM disciplines have an intrinsic nature of logical and mechanical thinking that recognizes quality in respecting logic, so strict rules are inevitable; [2] humanistic and social disciplines may find restriction to a one-way logical criterion inappropriate, and may prefer “thinking-out-of-the-box” attitudes (even at the cost of appearing “irrational”), so they lean towards inclusiveness.

Ideally, point [1] means not requiring any editorial standards as a guide to publishing in STEM disciplines, since STEM disciplines cannot survive below a critical level of logic (i.e. a universal principle). Point [2] means requiring editorial standards in order to let these disciplines survive, because an excessive level of heterogeneity can be harmful to a given line of study.

A flawed “bibliometrics” in scientific/academic research can alter this natural equilibrium, so that research on social topics has been transformed into a “limited supermarket of ideas” in which an internal group (lucky enough to be pioneers unburdened by rules/standards, which it then imposed at no particular cost to itself but at a cost to future generations) wants to take advantage of its “first-come, first-served” position (and perpetuate its initial status by mixing elements of one class into another as a side effect). This undermines the right of competing positions to similar consideration (with comparable probabilities of social acceptance). Dealing with STEM disciplines, we need to prioritize “research policy standards” over other indicators (and make them account for 70-80% of the “quantifiable value” of any STEM research). For instance, the presence or absence of supplementary materials (and their informative dimension) deserves the utmost consideration when it comes to evaluating scientific research. In my humble opinion, a manuscript that shares its original dataset and analysis code is more valuable than an alternative, albeit fascinating, manuscript that does not. This is because the main threat to a scientific discovery is its falsification by uncontrolled results, and then its fragile consistency under replication (e.g. I would discard plenty of studies whose p-values are not significant enough). In addition, STEM disciplines need high expertise and excellent command of techniques/methodologies rather than people who have published a lot on almost identical topics (e.g. disguised “salami-slicing”).
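
Purely to illustrate the 70-80% weighting proposed above, here is a toy scoring sketch. The component names, the 0.75 weight and the example scores are my own hypothetical choices; they only show how “research policy standards” could dominate the quantified value of a STEM paper.

```python
# Toy illustration of the weighting proposed above: "research policy
# standards" (shared data/code, supplementary materials) account for
# ~75% of a paper's quantifiable value, conventional bibliometric
# signals for the rest. All names and weights are hypothetical.

WEIGHTS = {
    "policy_standards": 0.75,  # shared data, code, supplements
    "bibliometrics": 0.25,     # citations, venue prestige, etc.
}

def paper_score(policy_standards: float, bibliometrics: float) -> float:
    """Combine component scores (each in [0, 1]) into a single value."""
    return (WEIGHTS["policy_standards"] * policy_standards
            + WEIGHTS["bibliometrics"] * bibliometrics)

# Under this weighting, an open paper in a modest venue outscores a
# closed paper in a prestigious one.
print(paper_score(policy_standards=0.9, bibliometrics=0.3))  # -> 0.75
print(paper_score(policy_standards=0.2, bibliometrics=0.9))  # -> 0.375
```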

Again, feel free to comment and criticize.

Placing (epistemic) inclusivity and diversity against research standards is an interesting framing. I agree that standards seem easier to apply to logical analysis in STEM, but I think that the replicability crises in many soft-science fields show that this is harder than it looks. In this case, I agree that impact-focused bibliometrics has moved the equilibrium away from just relying on honest researchers to publish reliable work towards editorial requirements that try to make sure researchers can only publish reliable work. Following the replication crises, I’m inclined to think this is a good response, but it might be that some reforms impose standards that don’t allow room for diverse research approaches (preregistration for exploratory research has been one point of discussion).

I can’t really comment much on quality control in the humanities, as that’s well outside my research experience. But I do think that it’s quite common for the founders of a field to impose their standards on the work that follows theirs. I’m not confident that this results from bibliometrics, though, as founder effects occur in many other systems as well.

No one can dissuade me from thinking that what we are all experiencing depends on digital innovation (i.e. the mastermind of our times) as the protagonist of social and intellectual change. And we are still in a transitory phase of a revolutionary transformation… one that not even scientific research can ignore!

And everything related to scientific research is (and will be) committed to “feeding this machine”… I apologize if I appear unclear, but I am happy to explain further if required.

We were used to living in a world where scientific communities were mostly financed by public funding, and this gave them the great opportunity to free themselves from “non-strictly scientific” interference (scientists did not particularly suffer from budgetary and/or performance targets, apart from the personal jealousy of colleagues). Meanwhile, a larger scientific community has arisen from all parts of the world thanks to digitalization. Advanced economies therefore struggled to win this “brain war” by introducing strict standards (and academic protectorates) that on the one hand preserved high-quality research but on the other hand contained the premise of their own decline (because scientific communities reduced their exposure to spurious original ideas).

So what? It happened that we started experiencing p-hacking, data dredging, publication biases and editorial arbitrage, because in the long run any strict rule gets hacked if it is not updated in time to survive the predatory instincts of competitors (who feel treated as foes, and so are nudged to challenge incumbent actors at all costs).

If such a “brain war” were to erupt (triggering ideological chaos), those working in the publishing industry and academic research might recognize it as bad for their own business, and therefore resort to accepting laxer rules in exchange for their own survival. This is a crisis but also an opportunity to experiment with new methods of sharing ideas that emerging actors can take advantage of (e.g. preprints may become interesting to the “digital machine” as an alternative to collapsing under the blows of a “brain war” that reduces the exchange of ideas, of feedback, and in turn of publications).

This could turn into an opportunity to propose a global “auction of ideas” through which researchers and the “machine” may start sharing and interacting with each other. This is because, on the one hand, digital innovation is real, and on the other, we have to manage it accurately (and pragmatically).