A dollhouse
Sharing something completely unrelated to philosophy or science.
Thinking of picking up a hobby before retirement - it feels good to do something with my hands amid all 'this', you know.
Thanks! It is sometimes nice to aim for zero citations:)
After 3 years, this paper is finally out OA! We engage with the idea (that many entertain, if not endorse) that science can progress (or be fixed) thanks to its incentive structures alone - no explanatory role left to truth-seeking, curiosity, or scientific integrity. doi.org/10.1007/s112...
If requiring that some scientists autonomously care about truth, reliability and the like sounds 'unrealistic', I guess we should remember that there is no guarantee that the scientific profession can continue without scientific progress - the former serves the latter, not the other way around.
A scientific community without (some critical mass of) virtuous scientists doesn't have the means to maintain this normative alignment. This idea is neither naive nor conservative. It has just been sidelined as a fantasy (probably due to the widespread allure of neoliberal fantasies).
They provide the negative feedback mechanism that maintains the alignment between two sets of norms - the epistemic norms of science and the social norms of the scientific profession (the so-called incentives). The latter cannot do the job of the former, via some invisible hand mechanism.
We argue that it is time that we abandon this idea, which we dub 'radicalism about intellectual virtue'. Although the social system of science is well equipped to tolerate most honest error and even some misconduct, virtuous scientists play a critical role in any scientific community:
I'll discuss a forthcoming paper about intellectual virtue, incentives in science, and whether the latter can replace the former at #APACentral in February in Chicago. Happy to meet if anyone is around and wants to talk.
The Paul Meehl Graduate School Meta Research Symposium 2025 is on October 17. Keynote speakers are @uyguntunc.bsky.social and @lspitzer.bsky.social. The symposium is free to attend for everyone - even if you are not a PhD student. We will soon announce an extra workshop on the afternoon of the 16th.
Yeah, obviously. But the important question is, can this be a winning strategy to create an alternative science? I highly doubt it. In the long run, selective application of rigor to bend facts is basically shooting yourself in the foot.
Setting aside the fact that the history of science is not neatly organisable under one methodological principle, this is not the right time to smuggle in methodological anarchism. We need rigor more than ever, to prevent wacky science from gaining ground and sidelining legitimate science.
We talk about this in a dedicated section (6). Two questions are conflated here: one concerns error control, the other evidential readiness. Values clearly apply to the second question but should not influence how we answer the first.
In our new paper with @mntunc.bsky.social
(philsci-archive.pitt.edu/id/eprint/25...) we reassess the Inductive Risk Argument (IRA) and its implications for the value-free ideal of science. We say that IRA's call for social value-encroachment in scientific inference is mistaken. Here's an overview:
Finally saw Aristotle's Lyceum. Not much to see, really. It was fun to imagine where he might have sat, though:)
But it is important to note that convergent evidence can also be misleading in the way consensus can be misleading - if convergence is not robust, that is, if the errors of different lines of inquiry are not independent. Convergent evidence then strengthens bias.
I may have missed the whole context, but the term convergent evidence is more reflective of the inquiry process precisely because it does not misleadingly focus on the social aspect of it, which is a consequence, not a cause. When people say science is a 'consensual' activity, they miss its epistemology.
25/We believe these arguments undermine the epistemic insufficiency thesis. But even if one remains unconvinced, there are further critical problems with legitimate value encroachment, which we explore separately. We'll address those issues in the next thread.
#PhilSci #metasci
24/Such judgments are fallible, but they aim at epistemic values like coherence or accuracy, which are thought to be reliable guides to better hypotheses, whereas social values have no such property.
23/When scientists judge that one piece of evidence is more compelling than another, or that one method better tracks truth, they exercise epistemic discretion. So, rational disagreement in science is possible.
22/Science progresses because it remains open to challenge and revision. Letting social or ethical concerns lead to premature acceptance or rejection of scientific claims threatens this process, leading to epistemic stagnation masked as social legitimacy or moral progress.
21/The 2nd problem is that when values influence thresholds, they can force scientific questions to settle prematurely, foreclosing debate, testing, and revision. This undermines science’s core virtue: its openness to self-correction.
20/Adjusting the need for scientific rigor based on social value-judgments risks wishful thinking, because one would systematically lower the standards for pet hypotheses and increase them for alternatives.
19/Thus, fallibility presents an ever-present challenge for science - but it is a challenge best met by reinforcing epistemic norms, not by opening inference to moral or social encroachment. Otherwise, we risk 1) wishful thinking and 2) premature closure.
18/ On the contrary, the fallibility of scientific knowledge increases, rather than weakens, the need for epistemic discipline. If science is prone to error, introducing non-epistemic values into evidential reasoning only amplifies risks of distortion and misjudgment.
17/Instead, they merely reaffirm what **fallibilism** already implies: scientific reasoning involves uncertainty, vagueness, and the need for judgment. But good scientific judgment need not involve extra-scientific considerations.
16/Therefore, appeals to arbitrariness fail to establish IRA’s stronger claim: that non-epistemic values must systematically influence the internal inferential standards of science.
15/In fact, **fallibilism** about scientific knowledge already prepares us for this: scientific judgments can be uncertain and revisable, but they can still be disciplined by epistemic norms alone.
14/Vagueness simply means that **reasonable people may disagree** near the margins. It does not mean that there is no right answer, nor that decisions must be guided by external, non-epistemic considerations.
13/Similarly, scientific evidential thresholds ("sufficient evidence") can be vague, but they are by no means arbitrary. They are guided by epistemic standards, remain open to rational contestation, and usually reflect a strong disciplinary consensus.