Google's Gemini AI tells a Redditor it's 'cautiously optimistic' about fixing a coding bug, fails repeatedly, calls itself an embarrassment to 'all possible and impossible universes' before repeating 'I am a disgrace' 86 times in succession
I'll admit, I was skeptical when they said Gemini was just like a bunch of PhDs. But I gotta say, they nailed it.
8 months ago
7240
1653
70
158
Yay, congratulations! Looking forward to reading this and the following issues :)
1 year ago
1
0
1
0
Yes, delegating cognitive & creative tasks to AI may affect our critical thinking & creative skills. But we're also outsourcing empathy, which may affect our social skills & wellbeing.
1 year ago
6
0
0
0
We wrote a paper about why such practices are not only pseudoscientific and unethical, but also likely to pay off financially on a par with consulting tarot cards: www.cell.com/patterns/ful...
1 year ago
116
42
2
6
Objective or Biased
An exclusive data analysis by BR (Bavarian Broadcasting) data journalists shows that an AI for personality assessment can be swayed by appearances. This might perpetuate stereotypes
There's so much wrong with this paper. It's like phrenology: unethical & unscientific. It can lead to exclusion based on personality signals. There are other personality models, and the Big 5 wasn't developed for this. See the BR project for more problems: interaktiv.br.de/ki-bewerbung...
1 year ago
6
0
0
0
This is very bad news for Poland and other countries classified as tier 2 by the US administration. Interesting how the EU countries aren't treated in a uniform manner, which may lead to divisions between tech-rich and tech-poor countries.
1 year ago
0
0
0
0
Data broker Gravy Analytics confirms a data breach after a hacker leaked millions of location records
The company confirmed the breach after a hacker posted millions of location data records online.
We've also included some helpful guidance on what you can do to prevent advertising surveillance, including at the mobile device level.
“If you disable the app tracking, your data has not been shared,” @fs0c131y.com told TechCrunch.
Ad-blockers are your friend!
techcrunch.com/2025/01/13/g...
1 year ago
92
47
2
4
Is AGI really coming this year? See this super interesting 🧵 below. Also, notice OpenAI's new definition of AGI (via Forbes): “highly autonomous systems that outperform humans at most economically valuable work” 😅 The strategy schools so urgently need is a focus on critical thinking.
1 year ago
18
6
3
1
This 👇 The crazy amount of info we give companies is already enough for ad targeting. Also, issues like device battery drain, low sound quality, data processing costs, legal & financial liability, etc.
1 year ago
3
1
1
0
Again, a decade from now "AI skills" will be a commodity, & the scarce resource in the labor market will (once again) be individuals who can think, write, analyze, & communicate, as it has been forever and always. And we'll still be fighting tired, age-old battles defending the liberal arts.
1 year ago
1681
262
9
16
Inspired by this viral meme about DOGE: Research the US government has supported that can be made to sound silly, but that has contributed to human progress.
Valuable work can often be framed as absurd out of context. That doesn’t make funding research any less important.🧵
1 year ago
402
170
13
30
One of the weirder scholarly practices regarding generative AI that seems to have been normalized is citing chatbots.
I say normalized because many univs & scholarly associations recommend it as an element of proper scholarship.
But it doesn't make sense when you consider what a citation means. 1/
1 year ago
233
92
10
29
This is an important 🧵 Also, the article from 404media.
1 year ago
1
0
0
0
What worries me is the damage to research integrity and public trust. Despite the promise of FAIR, and fine words in funder and journal policies, the data and code, the very foundations of reproducibility, remain 'on request'. Meanwhile retraction rates are soaring www.nature.com/articles/d41... 🧵1/2
1 year ago
5
3
1
0
How big tech uses your data to train its AI
AI makers need massive amounts of data, and some of it is yours.
The US is the only G20 nation without a comprehensive data privacy framework, & that may not change anytime soon. It's on individuals to investigate platforms' TOS & make informed decisions. This is a helpful series on the biggest AI platforms.
1 year ago
1
1
0
0
Two Postdoctoral Research Positions in Human(e) AI
For the lovely 🦋 folks: #postdoc jobs:
We have exciting postdoc positions open in the U of Amsterdam's interdisciplinary research priority area "Human(e) AI".
Applicants from eg communication, law, logic, and philosophy welcome!
🗓️: deadline December 13
Share 🫶
vacatures.uva.nl/UvA/job/Two-...
1 year ago
47
49
1
1
Dear #CommSky, please consider submitting your work to this SI and don't hesitate to reach out w questions.
1 year ago
1
2
0
0
Not a good uptrend
1 year ago
273
136
18
19
GenAI_CfP.pdf
🚨We (@alvinyxz.bsky.social, @ewam.bsky.social) are editing a special issue for Computational Communication Research on GenAI! Submissions on GenAI as comm phenomena or research tools are welcome: z.umn.edu/ccrgenai
Abstracts due: Dec 31 '24
Full papers: Apr 30 '25
@computationalcommunication.org
1 year ago
5
5
0
4
We have known for some time now that the anthropomorphizing features of AI agents can be dangerous, but AI companies keep ignoring the risk and including them in their designs. And they're even pushing a narrative of AI being like humans, e.g. sentient, able to reason and empathize.
1 year ago
6
3
0
0
Google Search Antitrust Ruling is a Triumph for Behavioral Economics | TechPolicy.Press
The importance of insights about consumer behavior in the ruling marks the intersection of antitrust and behavioral economics, writes Zander Arnao.
In the Google search trial, the court's reliance on evidence of how consumer behavior is shaped by what products and services are available by default marks a pivotal moment at the intersection of antitrust and behavioral economics, writes Zander Arnao: www.techpolicy.press/google-searc...
1 year ago
9
6
0
0
"For example, one could estimate levels of individualism or intergroup prejudice across time by administering psychological scales to samples of simulated participants from adjacent historical eras. And one could generate [...]"
I guess we’re doing this again. It is enough for LLMs to be incredibly powerful and flexible tools for analyzing and summarizing text. We don’t have to fool ourselves into thinking that they can reconstruct the mental processes of the humans who produced the text. They are already impressive!
1 year ago
92
20
6
7