At smaller scales, without aliasing, a discrete Fourier transform is literally this complex matrix, where arrow orientation and color reflect a complex number's phase/angle, and its length is the number's magnitude. This funny-looking thing.
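For anyone who wants to poke at the matrix being described, here is a minimal sketch of the standard DFT matrix, built entry by entry; every entry is a unit-magnitude complex number whose angle is what the arrows in the image encode:

```python
import cmath

def dft_matrix(n):
    # The (j, k) entry of the n-point DFT matrix is exp(-2*pi*i*j*k/n):
    # a point on the unit circle whose angle is -2*pi*j*k/n.
    return [[cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n)]
            for j in range(n)]

W = dft_matrix(8)
# Every entry has magnitude 1; the phase winds around the circle faster
# in later rows, which is what produces moiré-like patterns when the
# matrix is rendered at resolutions where the winding aliases.
```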
Posts by Ven Popov
and even this version looks different from the copy I uploaded. Any form of compression will absolutely struggle to know what to do with a Fourier matrix, and sometimes, just sometimes, things get pretty
Beyond doubt, my favorite piece of accidental art from studying math. This is so mesmerizing. What is it? A discrete Fourier transform matrix's battle with aliasing and en.wikipedia.org/wiki/Moir%C3.... Plus a little tinkering with my photos app color settings.
In your
- late teens: I am immortal (and of course will always be).
- late 20s: sometimes some things hurt. Must be good pain!
- late 30s: many things hurt that didn't before. Not so sure about this aging thing.
- late 50s: "Pain is the natural state of living beings"
- late 60s: ?
I also really like Nancy Cartwright's notion of nomological machines with respect to this issue
we take not only for granted, but as the only natural and obvious way to do physics - so deeply is it ingrained that we have completely forgotten it was never an obvious and natural way to look at the world. We only invented it, like, a minute ago on the grand scale of humanity's history
Frictionless ramps and balls falling through a vacuum did not exist. The idea of them did not exist. But Galileo set the study of motion on the right path by dreaming up idealized scenarios that would reveal inherent behavior - then extrapolating to the ideal from imperfect lab setups
The history of physics clearly supports your view (which I share). Galileo's greatest contribution to human thought is not any one particular discovery - but the invention of the idea that idealized thought experiments and their approximate physical implementation is the right way to do physics.
I often come back to this Kelvin quote when thinking about this issue (also touched upon at the end of my paper here osf.io/preprints/ps...)
100% with you. I've been thinking about the oft-made comparisons with physics a lot lately. Not only did lab conditions not exist before they were invented, but simple, elegant, and precise mathematical descriptions also needed to be invented. Physics isn't simple - hindsight is a dangerous prism
Nuance. This is not about what current models are capable of. If we only accept these questions as valid in the black-and-white case of either having "AGI" or not, it will be too late. They can be asked at every level of competency and agency, even partial ones - where they are most needed.
As progress in AI continues, the key policy questions become:
- What kinds of cognitive work can now be delegated?
- Under what degree of supervision?
- With what reliability?
- At what level of abstraction?
- With what consequences for education, institutions, agency, authorship and responsibility?
In the end, we each chart our own path, control our own learning experience, choose our own measures of success/satisfaction, contribute what we value ourselves. We don't need to patronize students as if they are not capable of choosing for themselves, as if they're easily duped by smoke & mirrors.
I wrote up something that's been in my head for a while: psychometric methods alone can't tell us what cognitive tasks and their indicators measure.
Correlating indicators across tasks is circular when constructs are defined by those same correlations.
osf.io/preprints/ps... 🧵1/3
I see why it may come off that way (and maybe that's what many mean...). But when I've made that comparison, it's not about the economic value of grad students - who are grossly undervalued anyway - but about how I often interact with models: as knowledgeable colleagues that need guidance.
I remember when that Gomila paper came out. I'm still frustrated by it. I think it's mostly nonsense
That’s lovely
I accidentally asked ChatGPT to "dummarize" a text for me, instead of summarize. Feels appropriate though, and it definitely should be a word.
Ultimately, for me, it is not about efficiency or making science faster. Science is not a factory. It is about removing obstacles between me and acquiring understanding about the things, problems, and objects I care about.
Those wiggles in the CDFs? At first I thought they were a simulation artefact. After some additional probing and simulations, we realized they are not - they are part of the core model. This led me to understand what REM does in a way I never appreciated before
And then we launched into exploring the model. This is not mimicry. Far from it. It is the kind of contribution that, if made by a human, would rightly deserve to be recognized in any publication.
can be used on their own to provide a much more performant Monte Carlo simulation of the original model by using some intermediate likelihood objects. It validated its own implementation (which, as far as I can tell, is genuinely distinct from both existing ones). 5/
working on this, and eventually responded that it wasn't able to find a numerically stable implementation - something the authors themselves caution about: the algorithms can be very fragile numerically. But wait. ChatGPT, ON ITS OWN, figured out that some of the techniques in the new paper
How could I trust the implementation? The authors helpfully provide some relevant benchmarks, which they used to check their implementation against the original Monte Carlo REM version. So I asked ChatGPT to also validate its implementation against these benchmarks. It spent about 20 minutes 3/
A colleague recently sent me this newer paper that uses the Fourier transform to derive an analytic solution for REM's likelihood. It looks like a nightmare to implement. A perfect job for testing the AI's limits, I thought, and I asked the ChatGPT 5.4 high-reasoning-effort model to implement it. 2/
One stunning recent example. Despite being a memory modeller, I had never myself implemented one of the most widely used models of recognition memory - REM. link.springer.com/article/10.3.... The model is not super complicated, but it is also difficult to simulate efficiently 1/
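For readers unfamiliar with REM, here is a minimal sketch of the model's core likelihood-ratio computation, following Shiffrin & Steyvers' standard formulation with geometrically distributed feature values. The parameter names (g, c) and default values are illustrative, and the probabilistic storage process that produces the traces is omitted:

```python
def rem_odds(probe, trace, g=0.4, c=0.7):
    # Likelihood ratio (odds) that this episodic trace was produced by
    # the probe item. A stored feature that mismatches the probe
    # multiplies the odds by (1 - c); a match on value v multiplies it
    # by (c + (1 - c) * P(v)) / P(v), where P(v) = g * (1 - g)**(v - 1)
    # is the geometric base rate of value v. Zeros are unstored features.
    odds = 1.0
    for p, t in zip(probe, trace):
        if t == 0:                     # never stored: carries no evidence
            continue
        pv = g * (1 - g) ** (p - 1)    # base rate of the probe's value
        odds *= (c + (1 - c) * pv) / pv if t == p else (1 - c)
    return odds

def recognize(probe, traces, criterion=1.0, g=0.4, c=0.7):
    # Respond "old" if the mean odds across all stored traces exceeds
    # the decision criterion (1.0 in the standard model).
    phi = sum(rem_odds(probe, t, g, c) for t in traces) / len(traces)
    return phi > criterion
```

The expensive part in practice is the Monte Carlo loop around this: simulating many noisy storage episodes per item to estimate hit and false-alarm rates, which is what an analytic likelihood would replace.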
If I find a model interesting enough I eventually go hands-on, but even then so much of my exploration of models has become like having a conversation with a collaborator who does all the heavy lifting code-wise, which frees me to spend much more time thinking about what it all means