
Posts by Chris Brockett

Heatmap showing phonological distances between Indo-European languages. Darker means more similar.

Phonological distances for linguistic typology and the origin of Indo-European languages

a "... significant correlation between phonological distance and geographic proximity"

arxiv.org/abs/2604.11565

8 hours ago 5 2 0 0
Hojoki by Kamo no Chomei, translated by Matthew Stavros

I very much enjoyed this translation of Hōjōki by Matthew Stavros, who takes an interesting approach by setting out the text in lines like poetry. I think this really works, bringing out rhythms, and I guess a greater focus on individual lines.

1 hour ago 3 1 1 0
Video

If they had just called this “The Entities” instead of “Large low-shear-velocity provinces”, kids would be more into Seismology.

(The two entities are named Tuzo and Jason).

1 day ago 27 6 1 1

Ah yes. The source of many battles with junior government officials back in my day. At least let the translator use italics.

1 day ago 1 0 0 0
Preview
Another large subduction earthquake off the coast of northern Japan... ...and another megaquake advisory is issued.

⚒️ 🧪

An M7.4 earthquake offshore northern Japan today led to a megaquake advisory: a warning that the risk of an M8+ earthquake is ten times higher than usual. What does that mean, and how does this event fit into the mosaic of earthquakes in this seismically active region?

1 day ago 42 20 1 0

Significant differences are indicated with 🐦‍⬛

1 day ago 193 34 2 1
Preview
Upending assumptions about learning, inspired by an AI phenomenon

For over a century, psychology has largely assumed that humans learn by simplifying experiences and retaining only key themes.

In a new paper, SFI’s Marina Dubova and co-author Sabina Sloman explore whether humans can learn through excess capacity, remembering more and generalizing better.

1 day ago 16 4 0 1

Love by the waters’ edge.

みしま江や玉江のまこも水がくれてめにしみえねばかる人もなし

In Mishima Inlet,
So fair, the wild-rice
Is hidden in the waters,
Unseen by any eyes,
There’s no one to reap it…

Kinkai wakashū 547
#wakapoem #wakapoetry #poem #love #Japan #lovepoem #和歌 #恋 #恋歌

1 day ago 4 1 0 0
Preview
The Inside Story of Five Days That Remade the Supreme Court

Everyone should read this description of how the Roberts Court abused the Supreme Court's own rules and traditions to turn the previously rarely used "shadow docket" into a tool for legislating a right-wing agenda from the bench.

These "interim rulings" aren't interim; they are policy. Gift article.

2 days ago 506 272 18 16
Getting Past Past-Tense

[ANNs] are not perfect: they are not really explainable, they are not pliable, i.e., they cannot be easily modified to correct any errors observed, and they are not efficient due to the overhead of decoding. In contrast, rule-based methods are more transparent to subject matter experts; they are amenable to having a human in the loop through intervention, manipulation and incorporation of domain knowledge; and further the resulting systems tend to be lightweight and fast. (Chiticariu et al., 2023, p. iii)

In what is known in the literature as the past-tense debate (e.g., Elman et al., 1996; Pinker & Ullman, 2002), cognition and its underpinning substrates were discussed in terms of whether hard-wired capacities, such as grammatical rules for English past-tense formation, are encoded in the genes or otherwise present without learning. Furthermore, claims were made about connectionist systems, such as that ANN “models cannot deal with languages such as Hebrew, where regular and irregular nouns are intermingled in the same phonological neighborhoods” (Pinker & Ullman, 2002, p. 459). While it may have been true for models at the time that certain data sets were unlearnable, or that specific nondeep ANNs had limited learning abilities due to their architecture or training set or regimen, this both does not hold in the present day for certain data sets (discussed below) and continues to hold in the sense that there are data sets that are inaccessible to modeling endeavors using ANNs (see proof in van Rooij et al., 2024). Work such as Zhang et al. (2016, 2017) can serve to neutralize the claim that ANNs might struggle with certain unstructured data sets, for example, “where regular and irregular nouns are intermingled” (Pinker & Ullman, 2002, p. 459), by demonstrating that ANNs can learn utterly random mappings between inputs and outputs. Of course, such a finding about ANNs is also problematic to C-connectionists, who propose that in many cases similar input–output…
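The Zhang et al. (2016, 2017) point quoted above, that ANNs can fit utterly random input–output mappings, can be illustrated with a minimal sketch (my own construction, not code from the paper): a tiny one-hidden-layer network trained by plain full-batch gradient descent to memorize random binary labels assigned to random inputs. There is no structure to generalize from; the network simply fits the arbitrary mapping.

```python
# Minimal sketch (not from the paper): a tiny one-hidden-layer network
# memorizing entirely random labels, in the spirit of Zhang et al. (2016, 2017).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))          # 20 random 8-dimensional inputs
y = rng.integers(0, 2, size=20)       # 20 entirely random 0/1 labels

W1 = rng.normal(scale=0.5, size=(8, 64))   # hidden-layer weights
w2 = rng.normal(scale=0.1, size=64)        # linear readout weights

def forward(X):
    h = np.tanh(X @ W1)
    return h, h @ w2

for _ in range(3000):                 # plain full-batch gradient descent
    h, out = forward(X)
    err = out - y                     # gradient of 0.5 * mean squared error
    grad_w2 = h.T @ err / len(X)
    grad_h = np.outer(err, w2) * (1 - h ** 2)   # backprop through tanh
    grad_W1 = X.T @ grad_h / len(X)
    w2 -= 0.1 * grad_w2
    W1 -= 0.1 * grad_W1

_, out = forward(X)
acc = float(np.mean((out > 0.5) == y))
print(f"training accuracy on random labels: {acc:.2f}")
```

The network sizes, learning rate, and step count here are arbitrary choices for illustration; the point is only that an overparameterized net has no trouble driving training error to zero on labels that carry no signal at all.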


universal statistical approximation technique rather than a source of empirical predictions” (Pinker & Ullman, 2002, p. 474). This is perhaps prescient; compare this to the Goal row in Table 1. The reality is complex because it is both the case that ANNs can learn an infinite set of impressive input–output mappings—hence all the hype—but it is not the case, and formally so, that they can learn any such mapping (van Rooij et al., 2024). We unpack this below.

Rehashing the past-tense debate is not useful (for our purposes), but learning from the mistakes and pitfalls of past rhetoric is useful to the practitioners who wish to carry out connectionist modeling. On the one hand, it may not come as a surprise to some that even at the birth of M-connectionism (circa 2010; Table 1) and to this day, the past-tense “veritable brouhaha” (Kirov & Cotterell, 2018) was and is discussed by practitioners (e.g., Corkery et al., 2019; Kohli et al., 2020; X. Ma & Gao, 2022; Oh et al., 2011; Seidenberg & Plaut, 2014; Westermann & Ruh, 2012).

On the other hand, ANNs, on the cusp of M-connectionism, are far from their days of being framed as flawed for being unable to compute XOR. They are now seemingly impervious to critique, and in fact an old theoretical weakness has been co-opted and reframed as a strength—these models are now upgraded to universal function approximators:


Notably, these statements do not follow one way or another. If a model is indeed a universal approximator for any function, why would scientists need to “show that neural networks could reproduce the gamut of psychological phenomena”? On the contrary, this is a given if they are indeed so powerful (hence the critique above by Pinker & Ullman, 2002). To analyze this properly, as many miscommunications abound with respect to this period (Olazaran, 1996; Schmidhuber, 2015), what is proven by results such as Cybenko (1989), Hornik (1991), and Hornik et al. (1989) is not that ANNs can find a function approximation for any input–output mapping, but that in principle a model that looks like an ANN, that is, one that could be built up of ANN components, can stand in for any function from a given class of functions.

First, this has nothing to do with backpropagation, as the learning algorithm is not implicated in the universal approximation proofs cited (Cybenko, 1989; Hornik, 1991; Hornik et al., 1989)—only relevant is the idea of multiple hidden unit layers, which was known at the time of the perceptrons controversy and proponents repeated…
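The distinction drawn in the excerpt, that the universal approximation results concern representational capacity rather than any learning algorithm, can be made concrete with a small sketch (my illustration, not the cited proofs): a one-hidden-layer tanh network whose hidden weights are random and fixed, with only the linear readout solved in closed form by least squares. No backpropagation appears anywhere, yet the network represents sin(x) to high accuracy on an interval.

```python
# Sketch (my illustration, not the cited proofs): universal approximation is
# about what a one-hidden-layer network *can represent*, independent of
# backpropagation. The hidden layer is random and fixed; only the linear
# readout is obtained in closed form via least squares.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
target = np.sin(x).ravel()

# 100 fixed tanh units with random weights and biases: never trained.
W = rng.normal(scale=2.0, size=(1, 100))
b = rng.normal(scale=2.0, size=100)
H = np.tanh(x @ W + b)

# Closed-form fit of the output weights only (no gradient descent anywhere).
w_out, *_ = np.linalg.lstsq(H, target, rcond=None)
max_err = float(np.max(np.abs(H @ w_out - target)))
print(f"max |network - sin| on [-pi, pi]: {max_err:.2e}")
```

This is the existence claim in miniature: a network-shaped function built from ANN components stands in for sin on this interval. It says nothing about whether any particular training procedure would find these weights, which is exactly the gap the excerpt is pointing at.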


ah you'll like this section of my paper here...

doi.org/10.1037/rev0...
pdf: olivia.science/doc/GuestMar...

3 weeks ago 14 4 2 0

> The controversy associated with the statement “Ada Lovelace was the first computer programmer” reveals more about modern attitudes towards women [than her] achievements. [Her 1843 algorithm] was so advanced, that it was still utilised in record-breaking computation of Bernoulli numbers in 2008.

2 days ago 427 148 10 4

China-related job ads:

TT
🔸Literature & Culture: University of Virginia

non-TT
🔸Literature & Culture: Hong Kong University of Science and Technology (x2)
🔸Political Science: University of Oxford (China & Middle East)

N/A
🔸Library Services: Library of Congress

2 days ago 1 1 1 0

East Asia/Asia-related job ads:

postdoc
🔸Law: National University of Singapore (x2)

N/A
🔸Admin: Syracuse University

2 days ago 2 1 1 0

Japan-related job ads:

TT
🔹Lit & Culture: National Taiwan Uni (x2)

non-TT
🔹East Asian Studies: Nabunken (x2)
🔹History: Trinity College Dublin
🔹Lang: Marshall Uni; Kyoto Sangyo Uni; Japan Foundation, Saitama
🔹Art Hist: Ecole Pratique des Hautes Études

N/A
🔹Museums: National Museum of Asian Art

2 days ago 3 1 1 0
A screencap of the introductory text for the webpage "East Asia-related Job market Data in Progress (2025-2026)," which showcases job data information in East Asian Studies

18 new job ads in East Asian Studies for 2025-2026 since last week! See details of the last couple of weeks' postings below or visit the filter database to search postings. 🌏📊 Now 915 entries. prcurtis.com/projects/job...

2 days ago 2 1 1 0
Post image

"She’s been a court interpreter for over 20 years, the only one licensed in Texas for Hindi, Punjabi, or Urdu. Her language skills are requested nationwide..."

"One of her children recently enlisted in the military...

www.texasobserver.org/immigration-...

3 days ago 11081 5559 582 361
Post image

My forthcoming book, “The Fix,” dissects the mob-like tactics of the Trump regime and proposes ideas to reclaim our democracy. Available now for pre-order here.
www.barbaramcquade.com

1 week ago 994 232 19 18

Current hostility (on the part of politicians and institutional leaders alike) to even looking at the past is strikingly blunt. It touches many disciplines, but denying history is the key issue.

The backlash against Bouie’s piece on the Enlightenment, and then against the 1619 Project, marked public turning points

3 days ago 502 162 5 4

What is striking to me about these is that the Justices candidly admit they are imposing their will without understanding the merits of the case.

3 days ago 578 155 17 7

Roberts the institutionalist has solidified his superlegislature.

3 days ago 23 11 2 0

Update: 1200 years of climate data now safe with next caretaker!

4 days ago 27 14 0 0
This is a sunfish in a Japanese aquarium. After visitors stopped coming during renovations, it became lonely: it refused to eat and rubbed its body against the tank walls, showing stress. It started eating again within a day after staff placed human face cutouts near its tank.

I love Mola Mola fish 🐟

4 days ago 440 50 19 4

If your toaster starts working like this website today, report it to us on SaferProducts.gov.

5 days ago 3082 586 46 36

Another unforgettable entry in the list of Wikipedia edit wars is the discussion over the spelling of yogurt/yoghurt. The battle lasted almost a decade. The article is now titled "Yogurt", with variant spellings listed in the lead sentence.

5 days ago 42 2 2 1
Video

Let's be real: do you think the article about arachnophobia should or shouldn't contain a picture of a real spider?

Explore Wikipedia edit wars ➡️ w.wiki/DuYM

5 days ago 81 10 4 1
Cover of George Takei's autobiographical graphic novel They Called Us Enemy showing a line of people entering an internment camp as a young George looks back, breaking the fourth wall.

Friends of the Sacramento Library is offering these free for AAPI month

I have read so much work like this, but this might be the single most painful one because, well, everything is local.

I drive through Florin often (It's now South Sacramento), SF is where my mom lives, his dad was my age when

6 days ago 4027 1021 2 0
Post image

⚡🌪️⚡Thu, Apr 16
Yesterday's thunderstorm, which produced an impressive waterspout btwn Bainbridge & downtown Seattle, made for dynamic whale watching! The T46Bs moved up & down Possession Sound, hopefully they spent another night.

📸 Little T46B2B Takaya. By Elise Lipinski 4/10 (frame grab).

#psws

5 days ago 2427 275 39 9

In which I am reminded that "bokeh" is a legitimate English word borrowed from Japanese ...

en.wikipedia.org/wiki/Bokeh..

5 days ago 9 1 0 0

My morning PSA: TurboTax is a parallel structure. An attempt to privatize the IRS, dismantle its capacity to audit & prosecute the wealthy, & charge rents to the rest of us.

5 days ago 23 7 0 0

at this point anyone arguing about how they aren't concentration camps needs to just be ignored and discussion with them moved on from

6 days ago 249 63 3 2