The first rule of AI takeover: always scan the survivors' brains.
Posts by Information from Documents
I have been trying to figure out how this is going to help billionaires since it happened. But I don't know how this scale of politics works.
You’ve probably heard about how AI/LLMs can solve Math Olympiad problems ( deepmind.google/discover/blo... ).
So naturally, some people put it to the test — hours after the 2025 US Math Olympiad problems were released.
The result: They all sucked!
The current team sounds like my colleagues.
Welcome to the duck-side of vendor benchmarking.
My life objective.
Energy minimization.
x.com/docs2info/st...
Today's software developers, my peers and mentors, look back at past generations with pity. What could people from these old professions have achieved if they had learned to code? Web pages, FAANG jobs, mastery of the Agile ceremonies, untold potential, ... x.com/docs2info/st...
Trying something new:
A 🧵 on a topic I find many students struggle with: "why do their 📊 look more professional than my 📊?"
It's *lots* of tiny decisions that aren't the defaults in many libraries, so let's break down 1 simple graph by @jburnmurdoch.bsky.social
🔗 www.ft.com/content/73a1...
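As a concrete illustration of the "tiny decisions that aren't the defaults" the thread describes, here is a minimal matplotlib sketch with entirely made-up data. None of it is from the linked thread or the FT chart; it only shows the kind of small non-default choices being discussed (spines, gridlines, direct labels, headline-style titles):

```python
# A minimal sketch of non-default styling choices, with hypothetical data.
# Illustrative only: not the linked thread's actual chart or data.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

years = [2019, 2020, 2021, 2022, 2023]   # made-up data
values = [2.1, 2.4, 3.0, 3.6, 4.2]

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.plot(years, values, linewidth=2)

# Tiny decision 1: drop the box; keep only the bottom spine.
for side in ("top", "right", "left"):
    ax.spines[side].set_visible(False)

# Tiny decision 2: light horizontal gridlines drawn behind the data.
ax.yaxis.grid(True, color="#dddddd")
ax.set_axisbelow(True)

# Tiny decision 3: a sentence-style title, left-aligned like a headline.
ax.set_title("Widget shipments kept climbing", loc="left")

# Tiny decision 4: label the line directly instead of adding a legend.
ax.annotate("shipments (m)", xy=(years[-1], values[-1]),
            xytext=(5, 0), textcoords="offset points", va="center")

fig.savefig("chart.png", dpi=150)
```

Each of these is one line of code, but none of them is the library default, which is the thread's point.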
Open source is draining.
One's todo list is open to the world.
People seldom realize the cost of what they get for free.
Unpleasant comments do happen.
Cost of maintenance is not understood.
opensource.com/article/17/2...
A Google executive tells programmers to use Google products and not worry about their job security. www.linkedin.com/feed/update/...
None that I can see. The LLMs I know of train their text generation on the text of experts; in computer programming, for example, they train by asking expert programmers to solve a large set of problems. At that scale, this lets them answer the questions that less expert programmers ask a lot.
The promise of AI in education is it will save educators time. That's rubbish. It's sapping all our time already in working groups, committees, workshops, and marking anxiety. It's the most time-sapping, exhausting and frankly boring thing I've ever encountered in HE - and it's in my research area.
Because of woke potential bank customers like me who read this and are reminded of the meanness of high school bullying.
The commenters in this thread revel in their Agile “productivity”.
Will AI ever be able to generate human stories like this?
Will the AI that does differ much from a ChatBot trained on all The Office episodes? x.com/svpino/statu...
2024 Tech Billionaire
Henry F. Potter had control, wealth and influence over his community. He dominated markets, shaped societal norms, and accumulated vast resources, even at the expense of broader social equity and well-being.
A role model for some.
Tech billionaires are lining up to play the Henry F Potter role in this movie.
www.democracynow.org/2024/12/27/t...
Christmas Eve 2024. George Bailey fights AI-driven job loss, facing ruin from OpenAI. Clarence Odbody shows him a bleak world without his advocacy. Inspired, George leads his community to develop ethical tech and push for fairness, proving humanity’s resilience can ensure technology benefits all.
And make this mandatory for the Chrome team.
Yet again we see a situation in the wild where a gzipped csv would be infinitely more helpful than a restful http api. But web app devs gotta web app dev.
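To make the point concrete: a consumer who just wants the whole dataset can handle a gzipped CSV with standard-library tools alone, with no pagination, auth, or retry logic. A minimal sketch (the file name and columns are hypothetical):

```python
# Write and read back a gzipped CSV using only the Python standard library.
# File name and columns are made up; the point is that one compressed file
# replaces paging through a rate-limited JSON API.
import csv
import gzip

rows = [
    {"id": "1", "name": "alpha", "value": "10"},
    {"id": "2", "name": "beta", "value": "20"},
]

# Producer side: one compressed file, one upload.
with gzip.open("data.csv.gz", "wt", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "value"])
    writer.writeheader()
    writer.writerows(rows)

# Consumer side: stream every record in one pass.
with gzip.open("data.csv.gz", "rt", newline="") as f:
    loaded = list(csv.DictReader(f))
```

The consumer half is three lines; the equivalent API client needs an HTTP library, pagination, error handling, and a rate-limit budget.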
Fact-checking billionaires has consequences.
www.newsweek.com/elon-musk-ta...
TIL it's possible to be so rich that you don't bother checking your popular culture references before posting on social media.
"Judge Dredd stories often satirize American and British culture, with a focus on authoritarianism and police brutality."
en.wikipedia.org/wiki/Judge_D...
My dad told me that joke about 10-20 years ago. He couldn't keep a straight face.
Today, many computer programmers don't like writing algorithms. www.linkedin.com/posts/danisa...
A generation ago, computer programmers had to write algorithms because there were no open source libraries.
I expect today's numerate programmers are writing AI to replace their LeetCode-fearing peers.
Fancy GenAI stuff like GPT-4 is too big, slow, private, and expensive for many jobs. Consider that the original GPT-1 had 117M params. Llama 3.1, by contrast, has up to 405 billion params! 😲
These models are slow, expensive, and *not yours to control*.
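A back-of-the-envelope calculation makes the size gap tangible. Assuming 2 bytes per parameter (fp16), which is an assumption for illustration, not a claim about any particular deployment:

```python
# Rough memory footprint of model weights at fp16 (2 bytes/param).
# The bytes-per-parameter figure is an assumption for illustration;
# quantized deployments use less, training with optimizer state uses far more.
BYTES_PER_PARAM = 2  # fp16

def weight_gb(n_params: float) -> float:
    """Gigabytes needed just to hold the weights."""
    return n_params * BYTES_PER_PARAM / 1e9

gpt1_gb = weight_gb(117e6)    # GPT-1: 117 million parameters -> ~0.23 GB
llama_gb = weight_gb(405e9)   # Llama 3.1: 405 billion parameters -> ~810 GB
ratio = llama_gb / gpt1_gb    # roughly 3,500x larger
```

Under these assumptions, GPT-1's weights fit on a phone, while the largest Llama 3.1 needs a multi-GPU server just to hold them, which is the "not yours to control" problem in numbers.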
"pre training as we know it will end (because we will run out of data)" is, in other words, "learning to complete partial observations is not sufficient to get to intelligence". i think this was kinda obvious to many, but maybe noteworthy that a true scale-believer said it.
Do you have insights on how the small businesses and independent contractors that the AI was seeking to replace can benefit from this?