Who would deny this world's transience, so clear to see?
Who does not know, though long we live, that death will surely be?
Who would deny three measures of cloth (the shroud) are the final legacy?
Who does not know this very grave is our true homeland?
Posts by Bahaeddin ERAVCI
Rough English translation:
O, people of wisdom, look upon our state,
This is, for certain, humanity's ultimate fate.
A troubled, ephemeral dominion is this earthly place,
Where every soul mistakes its hardship for some grace and comfort.
Started my NLP lectures today exploring the fascinating levels of natural language.
This slide features an interesting example: Turkish written in Greek letters (orthography) on a historic tombstone from Istanbul. A poetic and meaningful lesson transcending barriers...
We still know very little about complexity theory and the emergent behaviors of self-organizing processes across micro-to-macro systems.
I guess today’s highly compartmentalized mainstream scientific tradition hinders progress toward a holistic understanding.
The universe, in a baffling sense, creates local pockets of complexity (decreasing entropy through biological/physical self-organized structures) while relentlessly advancing toward a state of maximum global entropy.
www.quantamagazine.org/why-everythi...
Don's main distinction for a CS mentality:
- the ability to jump very quickly between levels of abstraction, between a low level and a high level, almost unconsciously
- the ability to deal with non-uniform (he means mathematically discontinuous, i.e. discrete, IMO) structures
Came across a book (actually a transcript of lectures at @mitofficial.bsky.social) by CS legend Donald Knuth, the author of The Art of Computer Programming. Not nearly as popular as TAOCP.
Love the line "Computer God talks about God" in the foreword; we'll see where it leads...
VLDB is a good example with a monthly cycle and accepted papers getting published concurrently in PVLDB each month.
The origins of aesthetics are really fascinating. Why do we deem this scene utterly spectacular and even tie it to the "long-tailed mountain lady"? What mechanisms shaped this "taste", and how?
#Severance isn’t a typical TV show. It’s a sharp dive into the philosophy of mind, probing identity, memory, and mind-body duality with surprising depth. Highly recommend...
There appears to be a striking correlation between ignorance on a topic and the confidence with which people make bold statements about it.
Can easily use this principle as a de-noising filter...
Self (and its counterpart, the other) is a very handy abstraction for making the most of our limited processing power.
The illusion of free will is a beneficial yet erroneous causal explanation we created after observing the self interacting with the other(s) for some time.
m.youtube.com/watch?v=_Ig9...
People with absolutely no theoretical or practical knowledge/experience (not a single call to nvidia-smi) of deep learning seem to easily predict the future of AI.
Their self-proclaimed prophetic confidence still amazes me.
Incentives influence choices, and mass choices create the Zeitgeist.
An unrestricted social media ecosystem favors the quick and superficial, seducing everyone, from professors to everyday individuals, into contributing content that appeals to our lower faculties, much like primal instincts.
Some reflections and insights after the 1993 NIPS by Leo Breiman, known for developing CART, bagging, and random forests.
I always find the less formal writings of pioneers more insightful.
The intersection of information theory and complexity theory has always been very interesting.
While entropy quantifies global uncertainty (potential information), observer-dependent entropy brings in the observer’s view (its world model) to define subjective uncertainty.
www.quantamagazine.org/what-is-entr...
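The contrast above can be made concrete with a toy sketch (my own illustration, not from the article): Shannon entropy measured in bits over the same set of outcomes, once for an observer with no world model (uniform distribution) and once for an observer whose model concentrates probability on one outcome. The class and distribution values are hypothetical.

```java
public class EntropyDemo {
    // Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i),
    // with the convention that 0 * log(0) = 0.
    static double entropy(double[] p) {
        double h = 0.0;
        for (double pi : p) {
            if (pi > 0) h -= pi * Math.log(pi) / Math.log(2);
        }
        return h;
    }

    public static void main(String[] args) {
        double[] ignorant = {0.25, 0.25, 0.25, 0.25}; // no world model: maximal uncertainty
        double[] informed = {0.70, 0.10, 0.10, 0.10}; // a model that favors one outcome
        System.out.println(entropy(ignorant)); // 2.0 bits
        System.out.println(entropy(informed)); // ~1.357 bits
    }
}
```

Same physical outcomes, different entropies: the number depends on the observer's distribution, which is the subjective-uncertainty point in one line of code.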
AGI isn't around the corner and scaling auto-regressive LLMs won't get us AGI.
I argue that, while AR-LLMs are great improvements, we need some very important paradigm shifts, drawing lessons from the past.
open.substack.com/pub/beravci/...
The infamous event of a "cultural generalization made by a keynote speaker" shows how bias and mis-generalization are hard problems even for humans (even an MIT professor). So maybe we should be more compassionate with LLMs trained on our data.
Hallucinations, especially somewhat grounded ones, are not a bug but rather a feature.
Artists, for centuries, have been the "hallucinators of human society", challenging common sense, i.e. the common daily patterns in society.
#NeurIPS and other major conferences should consider making presentations, at least important keynotes/highlights, publicly available.
One could easily make an argument based on the public funding behind the research presented. Funding agencies could also support this for more open science.
Feels like the beginning of the 1900s, with huge discoveries each year, but this time huge strides in tech.
Exciting to witness a tech revolution ranging from AI to quantum computing...
blog.google/technology/r...
GPU-poor man's home setup ready for a long night...
Great insights, thx @tiziano.bsky.social. Especially like it because it's a prospective and interventional experiment.
One question: any observed/estimated bias due to the "install browser extension" constraint?
This saga reminds me of the access modifiers I teach in my Java OOP course.
Maybe we need something similar in the generative AI age:
- Public: Content accessible to both AI and humans
- Protected: Human-only consumable public content
Easier said than done with a lot of technicalities though...
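The analogy can be sketched directly in Java (a toy illustration of the post's idea; the class and field names are hypothetical): each access modifier stands for one tier of the proposed content policy, with subclass access playing the role of the "trusted human" channel.

```java
// Each modifier maps to one tier of the proposed content policy.
class Content {
    public String openWeb = "readable by AI crawlers and humans alike"; // Public tier
    protected String humanTier = "human-only consumable";               // Protected tier
    private String drafts = "never leaves the author";                  // Private: author-only

    String authorView() { return drafts; } // only author-side code reaches private data
}

// A "trusted" consumer (say, a verified human client) plays the role of a subclass.
class HumanReader extends Content {
    String read() { return humanTier; } // legal: protected is visible to subclasses
    // String scrape() { return new Content().drafts; } // would not compile: private
}
```

As the post says, the hard part isn't the model itself but enforcing it: the compiler polices Java's tiers, whereas the open web has no equivalent gatekeeper.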
Test-of-time awards are the **real impact** metrics, akin to revolutionary science in the Kuhnian sense.
Congrats to @ian-goodfellow.bsky.social and Ilya on GANs and Seq2Seq.
blog.neurips.cc/2024/11/27/a...
Thx Roy, working on multimodal in medicine 👋
Another WW2-induced technology
:) Not an easy task to match 1 kW of magnetron power with a beamformer in the 12 cm wavelength (2.4 GHz) region at the usual oven size. Maybe with a room-sized oven...
BTW, guessing some EE background, Durk?
Yes, this is from the book but I think it is originally from one of Tenenbaum's previous papers on game engines for learning physics.