When I was in gradual school, I heard Prof Wolfgang Stolper explain that there are sins and there are professional sins. It is a sin for either a bishop or an academic to lie or to seduce a virgin, but it is a professional sin for a bishop to seduce a virgin and a professional sin for an academic to lie.
Was gradual school a mistake or deliberate joke?
Ja, ja. My subconscious has a great sense of humor.
Have you read Timur Kuran's *Private Truths, Public Lies: The Social Consequences of Preference Falsification*? I fear that Kuran may have hit that unhappy spot where he has produced a text which is too turgid for the general reader while lacking the rigor that the academic reader wants, but economists are an odd sort of academic, and it is hard to know what they will like next.
(And Professor of Economics and Islamic Studies is an unusual combination -- which made me hopeful of getting a rather different perspective on the problem, rather than the same *blah blah misinformation studies* you can read anywhere these days. Yep, it's different, but alas not all that falsifiable, as is all history.)
I think there is a great deal of explicit preference falsification going on in academia, and it would be nice if any proposed Hippocratic-type oath targeted it explicitly.
I have a vague memory of the title but I need to look at it (again?).
I think there are limits to what you can force people into doing. I tend to think the main benefit of such an oath is to give people an explanation for speaking up that isn't "to help those bad people".
I believe there are lots of academics who fundamentally want to speak up when they see a mistake or misrepresentation, but who fear that if they say "actually, the evidence for discrimination in X or for effect Z isn't that strong", people will impute to them a desire to help a certain side in a political dispute. An oath provides cover.
It can't do everything, but what I think it can do is provide a way to limit the pressure that can be applied on people to keep quiet (and in turn make it all the more significant when no one does raise arguments to the contrary).
I think you are correct that it could limit the pressure to stay quiet; I just worry that it won't do as much for 'tell 'em what they want to hear, even though it is nonsense'. When academics stop seeking the truth, but instead seek their next grant, or to maintain the consensus, or to please others on the faculty, everybody learns that this is how to behave. But then, I know too many people who think that being a scientist is all about being 'with the consensus', because the consensus is right because it is the consensus, not because it is true.
If you mean the tendency of academics to just publish papers claiming they discovered an interesting result (X causes cancer or prevents cancer, or himmicanes are less deadly than hurricanes, or any other paper in the replication crisis), my other proposal is for journals to have dedicated slots for rebuttals to papers (except maybe math journals).
Academics publish because it's the currency of the realm, and the bias for positive results is because those papers are easier to publish. If every paper in Nature carried a guarantee that Nature would publish the best rebuttal it receives within the year, and another at five years, then the incentives flip.
I think that sounds terrific.
That's an issue, but I think it will help to make the fact that one has a responsibility to the truth, and not just to turning out papers, a bit more salient. Also, I think what stops the people who collect the data and get the 'wrong' result from publishing is often a sense of "why waste my time preparing this and submitting it to a low-tier journal", since negative results are less sexy; an oath gives more of a sense of purpose there.
Also, I think when you get down to the people who actually publish on a given topic in the sciences, there is also the very human desire to just correct people.
I agree it won't solve everything for the reasons you say but it's a good start imo.
En passant, I'm reminded of Mars Attacks, of "experts" swapping the brains and bodies of dogs and humans because, hey, we have the technology, why not do so? And of Hume's, it is not contrary to Reason to prefer the destruction of the world to the pricking of my finger. And of the Tuskegee syphilis study. Of far too many other cases in the same vein -- or vain as the case may be.
Technology is, maybe unfortunately, not a panacea for what ails us, for the human condition, largely for the brute facts of our own mortality. Samuel Johnson's quip springs to mind:
SJ: "He who makes a beast of himself gets rid of the pain of being a man."
https://www.goodreads.com/quotes/8088-he-who-makes-a-beast-of-himself-gets-rid-of
Do you just mean cases where humans behave badly?
I mean Hume was correct. Contra Kant you can't infer moral facts from descriptive facts about the world and reason.
And I think it's important that we accept that being an expert in some academic area doesn't give one great moral insight (I mean it might give an insight about the logical relation of certain moral claims but that's different) and that a democracy requires trusting -- for good or ill -- the value judgement of the people.
> "Do you just mean cases where humans behaves badly?"
Mostly if not entirely. Apropos of which and given your mathematics background and ICYMI, you might have some interest in a book by Norbert Wiener -- "American computer scientist, mathematician and philosopher", and one of the main progenitors of the entire field of cybernetics -- titled "The Human Use of Human Beings", largely the subtext of my own Substack:
https://asounder.org/resources/weiner_humanuse.pdf
You might also enjoy his "God and Golem, Inc.": http://luisguillermo.com/diosygolem/God_and_Golem_Inc.pdf
And, with others, his "Behavior, Purpose, and Teleology":
https://www.scribd.com/document/946095/Behavior-Purpose-and-Teleology-Rosenblueth-Wiener-Bigelow
Technology is not a panacea, and it is not without its serious costs, particularly if used unwisely -- or inhumanely. As we've known since Prometheus, Icarus, and The Sorcerer's Apprentice.
> "I mean Hume was correct. Contra Kant you can't infer moral facts from descriptive facts about the world and reason."
Sure. Though I think the point of his analogy was that, as you've suggested or asserted, the viability or usefulness of a conclusion that follows from a chain of logic or reasoning is only as good as the premises or axioms one starts out with. Which are often no more than articles of faith, a supposition or tentative hypothesis at best.
> "And I think it's important that we accept that being an expert in some academic area doesn't give one great moral insight (I mean it might give an insight about the logical relation of certain moral claims but that's different) ..."
Agreed. ICYMI, Massimo Pigliucci (author of Nonsense on Stilts, highly recommended) took a shot at Sam Harris, who was quite sure 🙄 that "Science can answer moral questions":
https://rationallyspeaking.blogspot.com/2010/04/about-sam-harris-claim-that-science-can.html
No doubt science might answer questions as to some possible or probable consequences of various answers to those "moral questions". But answering the questions themselves in the first place is an entirely different kettle of fish. Gets down into the bedrock of Gödel's proof, of whether there is, in fact, a "Theory of Everything" -- the prognosis is not encouraging:
https://en.wikipedia.org/wiki/Theory_of_everything
> "... and that a democracy requires trusting -- for good or ill -- the value judgement of the people."
Not sure that that is a particularly good idea. No doubt in some cases "the value judgement(s) of the people" are (way?) out in front of those from "experts" but not always. You might consider Mark Twain's "Cornpone Opinions" for his thoughts on the hoi polloi:
MT: "Half of our people passionately believe in high [silver] tariff [circa 1900?], the other half believe otherwise. Does this mean study and examination, or only feeling? The latter, I think. I have deeply studied that question, too -- and didn't arrive. We all do no end of feeling, and we mistake it for thinking. And out of it we get an aggregation which we consider a boon. Its name is Public Opinion. It is held in reverence. It settles everything. Some think it the Voice of God."
https://www.paulgraham.com/cornpone.html
Which "judgement of the people" would you have gone with in that "debate"? Which half set of "feelings" held the high ground?
I've made the analogy to priests rather than doctors, but yes, academics need to bind themselves to some code to restore respect.
Probably the better analogy but most people aren't as familiar with the vows taken by the various different clerical orders.
David Watson asks a similar question … Does higher education need a Hippocratic oath? You might want to review the extent of overlap … https://onlinelibrary.wiley.com/doi/10.1111/j.1468-2273.2007.00359.x