Publishing research using outdated methods
I'm currently an Economics MA student doing a referee report on a paper that employs a dated empirical method. This particular method was originally created in 1980 but then improved upon in 1998 by other researchers, after they had discovered certain issues with it.
This got me thinking about how those at the cutting edge of research can lag more than a decade behind in method and still manage to get published.
This is concerning because it suggests that published researchers make mistakes and do not review all the relevant research before publishing. I imagine the same happens in the hard sciences and medicine, which is especially worrying given that those at the cutting edge could be decades behind knowledge that has been available for a long time.
Is this acceptable? If so, why?
journals academic-history
9
Oh, you meant economics? I thought you were talking about science and research.
– Eric Duminil
14 hours ago
1
It's not clear to me why the method is "outdated" in this specific case. Could you elaborate on this? So far you only mention that it is from 1998, which means it is 21 years old, but that does not mean it is outdated.
– Spectrosaurus
11 hours ago
1
I remember at the same stage of my career being shocked at the mediocrity of a lot of what gets published. I'm no longer shocked: convinced, instead, that 90% of academic researchers could be more usefully employed doing something other than academic research.
– Michael Kay
9 hours ago
1
@MichaelKay 95% of research is crap. The reward from the 5% makes it worth it. Basic research is always high-risk, high-reward, and that's why the commercial sector can't even come close to doing it. There's no way of telling whether something is crap or game-changing until it's done, or even until many decades after it's done. Somebody published obviously wrong data with a wrong explanation and, 16 years later, got the Nobel Prize.
– user71659
9 hours ago
2
@MichaelKay: I think the "publish or perish" mentality plays a large role. Academia would be a better place if people only wrote papers when they have something interesting to say.
– Eric Duminil
9 hours ago
edited 17 hours ago
eykanal♦
asked yesterday
EconJohn
5 Answers
What you call "an outdated method" another may call "the well-understood method".
In neuroscience, this is a very common occurrence. There are new techniques for analyzing different types of neural recordings coming out each month in a number of journals, and each one aims to improve on a specific aspect of a predecessor. Unfortunately, the new techniques are exactly that—new—and therefore untested against lots of data with different initial conditions. There are a good number of researchers who will simply ignore all the new techniques until people have developed them to a place of comfort. Even for those that do gain acceptance, they may not be appropriate for every type of analysis1.
I'm unfamiliar with your specific case, but I have seen similar situations elsewhere in Econ, where older published techniques remain highly popular because (1) they're well understood and (2) the new techniques were created to fix problems that are not present in all databases, or not relevant for a given analysis. The old fogies sometimes do have something to offer.
1 In one case, a technique called DCM became widely popular in a very short period of time, and consequently was very quickly becoming widely misused. It got so bad that the authors actually published a paper titled "Ten simple rules for dynamic causal modeling" with the goal of educating researchers on how to use the technique. (Biomed researchers in general don't have a great track record of performing world-class data analysis, but that's a separate story...)
This would probably be better as a comment, as it doesn't answer the question, but challenges its premise.
– holla
13 hours ago
5
@holla: The question asks why it would be acceptable to use older methods. This answer says why it is acceptable to use older methods. How does it not answer the question?
– kundor
10 hours ago
@kundor: Because the question states that it is about a dated method with known issues. This answer reads like "there are also unproblematic/more suitable old methods, and sometimes the newer methods are worse". Which is, of course, true -- but the question is about old methods which, for given problems, have been shown to be worse than newer methods.
– holla
10 hours ago
@holla, see my other comment, but the question does not state what you are saying it does. It does not describe a "dated method with known issues", it describes a "dated method with known issues that were then fixed in 1998". That's a huge difference and makes it very ambiguous what the actual problem is here.
– Spectrosaurus
8 hours ago
@Spectrosaurus: I read that in 1998, researchers created a new method because of the issues, but the researchers in question still use the bad version (and provide no justification). Otherwise, the question would be trivial: what should be the problem with an old method without issues?
– holla
8 hours ago
What you are describing is not uncommon. In my field, people still use methods developed 50 years ago. Some of these methods are still valid and have proven to be robust, some are flawed with known improvements, and some are downright logically inconsistent, but people still use them because of inertia.
Whether using an outdated method is a critical flaw in a paper depends on many factors. But it eventually comes down to whether the flaw in the method invalidates the main conclusion. For example, if the main result is qualitative, and the improvement from the new method is incremental, then it's not a big deal. If the result is supported by multiple lines of evidence, then the fact that one of them is flawed is then less severe of a problem. If the method is known to fail in special cases and it is clear that the data do not fall into such cases, then it is also not a big concern.
Overall, for better or worse, people are going to be more forgiving if the newer method is not well known or the improvement is marginal.
7
This reminds me of an occasion where a university Vice Chancellor told a department head that they need to update their programme and not continue teaching concepts that are hundreds of years old... such as the methods developed by Newton, Fourier, Leibniz, etc.
– Mick
22 hours ago
1
Using 50-year-old methods that work well is not so bad as publishing papers that claim to have rediscovered them. I've seen that more than once in my own field. It's easy to "forgive" students for not accessing literature that was only ever published on physical paper, but referees should know better IMO.
– alephzero
20 hours ago
7
@alephzero You mean like this paper which invented integration in 1994?
– David Richerby
20 hours ago
1
@Mick Please tell me you're joking. OT: The validity of research is not necessarily dictated by the up-to-dateness of the methods. A lot of current research in natural sciences is based on methods from the 60's which were deemed unfeasible but are now used because raw comp. power is available which makes them feasible (and the methods are easy to comprehend).
– Nox
17 hours ago
I'm currently ... doing a referee report on a paper... [Author did X] Is this acceptable?
You're the referee, so you tell us!
As a referee you have the authority to use your discretion here and decide what kind of recommendation you want to give to the editor. You have identified that the authors use an outdated method of analysis that has some problems highlighted in later literature. You should point this out in your review, and you will then need to decide how big of an issue this is. Is the old method sufficiently poor that the method should be revised to the improved method from 1998? If so then perhaps a revise and resubmit might be appropriate (assuming other aspects of the paper are okay).
This got me thinking about how exactly those on the cutting edge of research seem to lag behind by over a decade (or more) in method and still manage to get published.
This is not at all uncommon. It happens to many well-known techniques too.
Symbolic execution was invented in 1976. But it had been dead for decades until being resurrected around 2005 (thanks to significant advances in constraint solving). Now, it is popular, used in Google, Microsoft, NASA etc. All winning teams in DARPA Cyber Grand Challenge used it, the top team was bought by the Pentagon. What a comeback.
A similar story holds for neural networks: they were crushed by SVMs (with kernel methods) years ago. They have now been resurrected under a fancy new name: deep learning.
I am not sure if I understand this correctly. Your examples read more like "somebody rediscovered old methods and explained why they are better than the new ones", while the question reads to me as "somebody used an old method while researchers in the field know the new method is better, and provided no justification, as if they had not heard about the new methods". Am I wrong?
– holla
10 hours ago
I would say that, in its own right, the fact that a better method exists does not, and should not, invalidate research.
It might be worth noting that if a better method exists, has been used, and provides strictly better results that the outdated method does little or nothing to improve upon, then that's a different story.
To reiterate, I would be very uncomfortable citing "could have done better", on its own, as a rebuttal.
For what it's worth, my field mostly involves computational modelling, and new methods are a frequent occurrence. The entire field only ever publishing with the latest and greatest methods would be almost inconceivable, and perhaps that affects my opinion more than it should in other fields.
In this generality, I disagree. While using the old, problematic method is still "research", it doesn't have to be "interesting research", i.e. publishable.
– holla
12 hours ago
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
What you call "an outdated method" another may call "the well-understood method".
In neuroscience, this is a very common occurrence. There are new techniques for analyzing different types of neural recordings coming out each month in a number of journals, and each one aims to improve on a specific aspect of a predecessor. Unfortunately, the new techniques are exactly that—new—and therefore untested against lots of data with different initial conditions. There are a good number of researchers who will simply ignore all the new techniques until people have developed them to a place of comfort. Even for those that do gain acceptance, they may not be appropriate for every type of analysis1.
I'm unfamiliar with your specific case, but I have seen similar concepts elsewhere in Econ, where older published techniques remain highly popular because (1) they're well-understood and (2) the new techniques were created to fix problems that not present in all databases, or not relevant for a given analysis. The old fogies sometimes do have something to offer.
1 In one case, a technique called DCM became widely popular in a very short period of time, and consequently was very quickly becoming widely misused. It got so bad that the authors actually published a paper titled "Ten simple rules for dynamic causal modeling" with the goal of educating researchers how to use the technique. (Biomed researchers in general don't have a great track record of performing world-class data analysis, but thats a separate story...)
This would probably be better as a comment, as it doesn't answer the question, but challenges its premise.
– holla
13 hours ago
5
@holla: The question asks why it would be acceptable to use older methods. This answer says why it is acceptable to use older methods. How does it not answer the question?
– kundor
10 hours ago
@kundor: Because the question states that it is about a dated method with known issues. This answer reads like "there are also unproblematic/more suitable old methods and sometimes the newer methods are worse". Which is, of course, true -- but the question is about old methods which are for given problems proven to be worse than new methods.
– holla
10 hours ago
@holla, see my other comment, but the question does not state what you are saying it does. It does not describe a "dated method with known issues", it describes a "dated method with known issues that were then fixed in 1998". That's a huge difference and makes it very ambiguous what the actual problem is here.
– Spectrosaurus
8 hours ago
@Spectrosaurus: I read that in 1998, researchers created a new method because of the issues, but the researchers in questions still use the bad version (and provide no justification). Otherwise, the question would be trivial: what should be the problem with an old method without issues?
– holla
8 hours ago
|
show 1 more comment
What you call "an outdated method" another may call "the well-understood method".
In neuroscience, this is a very common occurrence. There are new techniques for analyzing different types of neural recordings coming out each month in a number of journals, and each one aims to improve on a specific aspect of a predecessor. Unfortunately, the new techniques are exactly that—new—and therefore untested against lots of data with different initial conditions. There are a good number of researchers who will simply ignore all the new techniques until people have developed them to a place of comfort. Even for those that do gain acceptance, they may not be appropriate for every type of analysis1.
I'm unfamiliar with your specific case, but I have seen similar concepts elsewhere in Econ, where older published techniques remain highly popular because (1) they're well-understood and (2) the new techniques were created to fix problems that not present in all databases, or not relevant for a given analysis. The old fogies sometimes do have something to offer.
1 In one case, a technique called DCM became widely popular in a very short period of time, and consequently was very quickly becoming widely misused. It got so bad that the authors actually published a paper titled "Ten simple rules for dynamic causal modeling" with the goal of educating researchers how to use the technique. (Biomed researchers in general don't have a great track record of performing world-class data analysis, but thats a separate story...)
This would probably be better as a comment, as it doesn't answer the question, but challenges its premise.
– holla
13 hours ago
5
@holla: The question asks why it would be acceptable to use older methods. This answer says why it is acceptable to use older methods. How does it not answer the question?
– kundor
10 hours ago
@kundor: Because the question states that it is about a dated method with known issues. This answer reads like "there are also unproblematic/more suitable old methods and sometimes the newer methods are worse". Which is, of course, true -- but the question is about old methods which are for given problems proven to be worse than new methods.
– holla
10 hours ago
@holla, see my other comment, but the question does not state what you are saying it does. It does not describe a "dated method with known issues", it describes a "dated method with known issues that were then fixed in 1998". That's a huge difference and makes it very ambiguous what the actual problem is here.
– Spectrosaurus
8 hours ago
@Spectrosaurus: I read that in 1998, researchers created a new method because of the issues, but the researchers in questions still use the bad version (and provide no justification). Otherwise, the question would be trivial: what should be the problem with an old method without issues?
– holla
8 hours ago
|
show 1 more comment
What you call "an outdated method" another may call "the well-understood method".
In neuroscience, this is a very common occurrence. There are new techniques for analyzing different types of neural recordings coming out each month in a number of journals, and each one aims to improve on a specific aspect of a predecessor. Unfortunately, the new techniques are exactly that—new—and therefore untested against lots of data with different initial conditions. There are a good number of researchers who will simply ignore all the new techniques until people have developed them to a place of comfort. Even for those that do gain acceptance, they may not be appropriate for every type of analysis1.
I'm unfamiliar with your specific case, but I have seen similar concepts elsewhere in Econ, where older published techniques remain highly popular because (1) they're well-understood and (2) the new techniques were created to fix problems that not present in all databases, or not relevant for a given analysis. The old fogies sometimes do have something to offer.
1 In one case, a technique called DCM became widely popular in a very short period of time, and consequently was very quickly becoming widely misused. It got so bad that the authors actually published a paper titled "Ten simple rules for dynamic causal modeling" with the goal of educating researchers how to use the technique. (Biomed researchers in general don't have a great track record of performing world-class data analysis, but thats a separate story...)
What you call "an outdated method" another may call "the well-understood method".
In neuroscience, this is a very common occurrence. There are new techniques for analyzing different types of neural recordings coming out each month in a number of journals, and each one aims to improve on a specific aspect of a predecessor. Unfortunately, the new techniques are exactly that—new—and therefore untested against lots of data with different initial conditions. There are a good number of researchers who will simply ignore all the new techniques until people have developed them to a place of comfort. Even for those that do gain acceptance, they may not be appropriate for every type of analysis1.
I'm unfamiliar with your specific case, but I have seen similar concepts elsewhere in Econ, where older published techniques remain highly popular because (1) they're well-understood and (2) the new techniques were created to fix problems that not present in all databases, or not relevant for a given analysis. The old fogies sometimes do have something to offer.
1 In one case, a technique called DCM became widely popular in a very short period of time, and consequently was very quickly becoming widely misused. It got so bad that the authors actually published a paper titled "Ten simple rules for dynamic causal modeling" with the goal of educating researchers how to use the technique. (Biomed researchers in general don't have a great track record of performing world-class data analysis, but thats a separate story...)
edited yesterday
answered yesterday
eykanal♦eykanal
42.1k15101206
42.1k15101206
This would probably be better as a comment, as it doesn't answer the question, but challenges its premise.
– holla
13 hours ago
5
@holla: The question asks why it would be acceptable to use older methods. This answer says why it is acceptable to use older methods. How does it not answer the question?
– kundor
10 hours ago
@kundor: Because the question states that it is about a dated method with known issues. This answer reads like "there are also unproblematic/more suitable old methods and sometimes the newer methods are worse". Which is, of course, true -- but the question is about old methods which are for given problems proven to be worse than new methods.
– holla
10 hours ago
@holla, see my other comment, but the question does not state what you are saying it does. It does not describe a "dated method with known issues", it describes a "dated method with known issues that were then fixed in 1998". That's a huge difference and makes it very ambiguous what the actual problem is here.
– Spectrosaurus
8 hours ago
@Spectrosaurus: I read it as: in 1998, researchers created a new method because of the issues, but the researchers in question still use the bad version (and provide no justification). Otherwise, the question would be trivial: what would be the problem with an old method without issues?
– holla
8 hours ago
What you are describing is not uncommon. In my field people still use methods developed 50 years ago. Some of these methods are still valid and have proven robust, some are flawed with known improvements, and some are downright logically inconsistent, but people still use them because of inertia.
Whether using an outdated method is a critical flaw in a paper depends on many factors, but it eventually comes down to whether the flaw in the method invalidates the main conclusion. For example, if the main result is qualitative and the improvement from the new method is incremental, then it's not a big deal. If the result is supported by multiple lines of evidence, then the fact that one of them is flawed is a less severe problem. If the method is known to fail only in special cases and it is clear that the data do not fall into such cases, then it is also not a big concern.
Overall, for better or worse, people are going to be more forgiving if the newer method is not well known or the improvement is marginal.
This reminds me of an occasion where a university Vice Chancellor told a department head that they need to update their programme and not continue teaching concepts that are hundreds of years old... such as the methods developed by Newton, Fourier, Leibniz, etc.
– Mick
22 hours ago
Using 50-year-old methods that work well is not as bad as publishing papers that claim to have rediscovered them. I've seen that more than once in my own field. It's easy to "forgive" students for not accessing literature that was only ever published on physical paper, but referees should know better, IMO.
– alephzero
20 hours ago
@alephzero You mean like this paper which invented integration in 1994?
– David Richerby
20 hours ago
@Mick Please tell me you're joking. OT: The validity of research is not necessarily dictated by how up to date the methods are. A lot of current research in the natural sciences is based on methods from the 1960s which were deemed unfeasible at the time but are now used because the raw computational power to make them feasible is available (and the methods are easy to comprehend).
– Nox
17 hours ago
answered yesterday
Drecate
5,30012242
I'm currently ... doing a referee report on a paper... [Author did X] Is this acceptable?
You're the referee, so you tell us!
As a referee you have the authority to use your discretion here and decide what kind of recommendation you want to give to the editor. You have identified that the authors use an outdated method of analysis with problems highlighted in later literature. You should point this out in your review, and you will then need to decide how big an issue this is. Is the old method sufficiently poor that the analysis should be redone with the improved method from 1998? If so, then perhaps a revise-and-resubmit might be appropriate (assuming other aspects of the paper are okay).
edited 9 hours ago
answered 23 hours ago
Ben
13.7k33462
This got me thinking about how exactly those on the cutting edge of
research seem to lag behind by over a decade (or more) in method and
still manage to get published.
This is not at all uncommon. It happens to many well-known techniques too.
Symbolic execution was invented in 1976, but it was dead for decades until being resurrected around 2005 (thanks to significant advances in constraint solving). Now it is popular, used at Google, Microsoft, NASA, etc. All of the winning teams in the DARPA Cyber Grand Challenge used it, and the top team was bought by the Pentagon. What a comeback.
A similar story holds for neural networks: they were crushed by SVMs (with kernel methods) years ago, and have now been resurrected under a fancy new name: deep learning.
I am not sure if I understand this correctly. Your examples read more like "somebody rediscovered old methods and explained why they are better than the new ones", while the question reads to me as "somebody used an old method while researchers in the field know the new method is better, and provided no justification, as if they had not heard of the new method". Am I wrong?
– holla
10 hours ago
answered 10 hours ago
qsp
12.1k83270
I would say that, in its own right, the fact that a better method exists does not, and should not, invalidate research.
It might be worth noting that if a better method exists, has been used, and provides strictly better results that the outdated method does little or nothing to improve upon, then that's a different story.
To reiterate, I would be very uncomfortable citing "could have done better", on its own, as a rebuttal.
For what it's worth, my field mostly involves computational modelling, where new methods are a frequent occurrence. The entire field only ever publishing with the latest and greatest methods would be almost inconceivable, and perhaps that affects my opinion more than it should in other fields.
In this generality, I disagree. While using the old, problematic method is still "research", it doesn't have to be "interesting research", i.e. publishable.
– holla
12 hours ago
answered 13 hours ago
ANone
1804
Oh, you meant economics? I thought you were talking about science and research.
– Eric Duminil
14 hours ago
It's not clear to me why the method is "outdated" in this specific case. Could you elaborate on this? So far you only mention that it is from 1998, that means it is 21 years old, but that does not mean that it is outdated.
– Spectrosaurus
11 hours ago
I remember at the same stage of my career being shocked at the mediocrity of a lot of what gets published. I'm no longer shocked: convinced, instead, that 90% of academic researchers could be more usefully employed doing something other than academic research.
– Michael Kay
9 hours ago
@MichaelKay 95% of research is crap. The reward from the 5% makes it worth it. Basic research is always high-risk, high-reward, and that's why the commercial sector can't even come close to doing it. There's no way of telling whether something is crap or game-changing until it's done, or even for many decades after it's done. Somebody once published obviously wrong data with a wrong explanation, and 16 years later got the Nobel Prize.
– user71659
9 hours ago
@MichaelKay: I think the "publish or perish" mentality plays a large role. Academia would be a better place if people only wrote papers when they have something interesting to say.
– Eric Duminil
9 hours ago