
Posts by Mehmet Necip Tunc

"The benefits of and motivations behind large-team coordination in psychology" is finally out as preprint.

In this paper, @lakens.bsky.social, Krist Vaesen, and I discuss the possible rewards of the large-team collaborations that are common in coordinated research.

1 month ago 11 5 1 0

In Philosophy of Nature, Feyerabend says that his position can be seen as exploring the implications of Lévi-Strauss's ideas on myths for the philosophy of science. I think it's a fascinating connection, especially given his indirect but significant influence on STS and the strong programme.

9 months ago 2 0 0 0
Science-integrity project will root out bad medical papers ‘and tell everyone’
Group behind Retraction Watch aims to pinpoint the most influential flawed health data.

Thrilled to announce this new $900,000 project headed by @jamesheathers.bsky.social

10 months ago 109 47 4 11

The paper you shared seems to be telling a different story, or am I missing something here?

10 months ago 1 0 1 0

I'm sorry for empowering Trump to attack science by asking people to use better data management and statistical practices. I take full responsibility for my actions and apologize to those who could so obviously see how my efforts would be responsible for ending American science.

10 months ago 71 9 5 1

The motto of some anti-Trumpers in science these days: let a thousand Wansinks and Stapels bloom!

10 months ago 9 2 2 0

But look what Nagel says in that very book about standpoints and objectivity:

10 months ago 3 0 0 0

The View From Nowhere is the name of a book written by T. Nagel, often quoted to demonstrate the absurdity of the "positivist" position. The position attributed to Nagel is criticized as impossible and mythical, especially by those who emphasize the inevitability of different standpoints in science.

10 months ago 0 0 1 0

I follow up on this here: bsky.app/profile/mntunc.bsky.soci...

11 months ago 0 0 0 0

12/ It should be emphasized that a scientific community committing to a specific alpha is exercising a form of discretion, since it can never be known with certainty how close these values are to the true optimum for long-term error control. But discretion ≠ arbitrariness.

11 months ago 0 0 0 0

11/ Not really. As long as the specific value that these thresholds are supposed to take is defended in an epistemically principled way, there is rational disagreement, not arbitrariness. And rational disagreement in science is a feature, not a bug.

11 months ago 0 0 1 0

10/ We admit that conventional evidential thresholds are **imperfect** solutions (or rather approximations) to an optimization problem. So, doesn't that mean the specific values are always open to debate and thus "arbitrary"?

11 months ago 0 0 1 0

9/ Widely shared evidential standards rooted in epistemic considerations are indispensable for the collective pursuit of truth: without them, there is no way to create a collectively accepted frame of reference (an evidential base).

11 months ago 0 0 1 0

...They aren’t perfect, but they are deemed close enough to serve the long-run aim of controlling error and so converging on truth. This is also what makes it possible to learn from experiments in a piecemeal but socially organized fashion.

11 months ago 0 0 1 0

8/ Field conventions (like 0.05) approximate the epistemic optimum under (sometimes) conflicting epistemic aims such as discovery and justification...

11 months ago 0 0 1 0

7/ Alpha levels reflect an **epistemic optimization** problem. Scientists seek thresholds that maximize true positives while minimizing false ones, given sample sizes, measurement noise, and prior odds. That’s not arbitrary—that’s calibration.

11 months ago 0 0 1 0
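The optimization framing in 7/ can be sketched numerically. This is a minimal illustration, not anything from the paper: the effect size, sample size, prior odds of a true effect, and unit error costs below are all made-up assumptions, and the test is an idealized one-sided z-test. The point is only that a threshold can be *chosen* to minimize expected long-run error cost rather than picked at whim.

```python
from statistics import NormalDist

norm = NormalDist()

def expected_error_cost(alpha, effect=0.5, n=64, prior_true=0.1,
                        cost_fp=1.0, cost_fn=1.0):
    """Expected long-run error cost of a one-sided z-test at threshold alpha.

    All default parameters are illustrative assumptions. A false positive
    occurs with probability alpha when H0 is true; a false negative with
    probability (1 - power) when H1 is true.
    """
    z_crit = norm.inv_cdf(1 - alpha)                   # rejection cutoff
    power = 1 - norm.cdf(z_crit - effect * n ** 0.5)   # P(reject | H1)
    return ((1 - prior_true) * alpha * cost_fp
            + prior_true * (1 - power) * cost_fn)

# Scan candidate thresholds and keep the cheapest: calibration, not whim.
grid = [a / 1000 for a in range(1, 200)]
best_alpha = min(grid, key=expected_error_cost)
```

With these particular assumptions the cheapest threshold lands well below .05; change the prior odds or the costs and the optimum moves, which is exactly the "responsive adaptation to domain constraints" point made later in the thread.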

6/ In fallibilist epistemology, justified belief doesn’t require certainty. So why should scientific inference require absolute thresholds? All thresholds are approximations—but that doesn’t make them unjustifiable or value-driven.

11 months ago 1 0 1 0

5/ There is a lesson to be learned from the sorites paradox: vagueness ≠ meaninglessness. Concepts like "heap" are vague but still usable. “Statistical significance” is likewise vague at the boundary, but functionally essential. Fuzziness at the margins doesn’t nullify the category.

11 months ago 0 0 1 0

...then no amount of sand added grain by grain, no matter how large N gets, will ever form a heap. Similarly, no single increase in the third decimal of a p-value can by itself separate signal from noise.

11 months ago 0 0 1 0

4/ Critics claim the distinction between 0.049 and 0.051 is meaningless. This leads us into the **Sorites Paradox**: one grain of sand is not a heap. If we add one more grain, it still isn't a heap. So if N grains are not a heap, and N+1 grains are not a heap…

11 months ago 0 0 1 0

3/ Yes, different fields use different thresholds (e.g., 5σ in physics, p < .05 in psych), but this isn't relativism. It's responsive adaptation to domain constraints. What’s shared is the logic of error control—not value judgments, but probability theory.

11 months ago 1 0 1 0
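The 5σ-vs-p < .05 comparison in 3/ is just two points on the same standard-normal tail scale. A small sketch of the conversion (my own illustration, not from the paper; the function name is made up):

```python
from statistics import NormalDist

def sigma_to_p(sigma, two_sided=False):
    """One- or two-sided tail probability of a z-score of `sigma` under H0."""
    tail = 1 - NormalDist().cdf(sigma)
    return 2 * tail if two_sided else tail

p_physics = sigma_to_p(5)                    # one-sided 5-sigma discovery rule
p_psych = sigma_to_p(1.96, two_sided=True)   # roughly the familiar .05
```

Physics' 5σ corresponds to p ≈ 3 × 10⁻⁷, orders of magnitude stricter than psychology's .05, yet both are instances of the same error-control logic applied under different base rates and measurement regimes.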

...as they are usually pre-specified, field-wide, and rooted in epistemic considerations like sample size, base rates, and discovery/accuracy trade-offs (albeit loosely, or as an approximation).

11 months ago 0 0 1 0

2/ The meaning of "arbitrariness" here is ambiguous. Does it mean unfixed, inconsistent, unjustified? Standard α-levels cannot be described as any of these...

11 months ago 0 0 1 0

1/ But what about the counterargument that conventional evidential thresholds (like p < 0.05) are arbitrary? Doesn't "God love the .06 nearly as much as the .05"?

11 months ago 0 0 1 1
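Whatever value a community commits to, the long-run false-positive rate under the null tracks that threshold exactly; that is the error-control logic the thread defends. A quick Monte Carlo sketch (my illustration, not from the paper; each "experiment" is idealized as a single z-statistic drawn under H0):

```python
import random
from statistics import NormalDist

random.seed(1)  # reproducible illustration
norm = NormalDist()

def rejection_rate(alpha, trials=100_000):
    """Share of simulated null experiments rejected at threshold alpha.

    Each trial draws one z-statistic under H0, so in the long run the
    rejection rate should track alpha itself, whatever its value.
    """
    crit = norm.inv_cdf(1 - alpha)
    return sum(random.gauss(0, 1) > crit for _ in range(trials)) / trials

rate_05 = rejection_rate(0.05)   # ~0.05 in the long run
rate_06 = rejection_rate(0.06)   # ~0.06: .06 "works" too, by the same logic
```

God may indeed love the .06 nearly as much as the .05; what matters epistemically is that the community commits to one value in advance so that long-run error is controlled at a known rate.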

Thank you for your kind words. We would be glad to hear your takes on this.

11 months ago 1 0 0 0

1/ In our recent paper with @uygun_tunc (philsci-archive.pitt.edu/25196/), we defend the use of conventional alpha levels (e.g., 0.05, 0.01, or 5 sigma) in scientific inference. We challenge the claim that these thresholds should be set in a value-laden or context-dependent way. 🧵👇

11 months ago 9 7 2 1

20/ Final word: Let scientists decide what counts as evidence. Let society decide what to do with it. Don’t confuse acceptance with action. Neyman’s behaviorism in science is about inquiry, not policy.

11 months ago 0 0 1 0

19/ Epistemic decisions must be guided by internal standards—replication, robustness, predictive power—not external stakes. Mixing these contexts leads to strategic science, not trustworthy science.

11 months ago 0 0 1 0

18/ The right model: scientists manage epistemic risks (false positives, negatives); policymakers manage practical risks (health, safety, equity). Confusing the two collapses responsible governance.

11 months ago 0 0 1 0

17/ That creates a feedback loop where science no longer disciplines belief—it validates pre-existing preferences. This undermines the function of evidence entirely.

11 months ago 0 0 1 0