Advertisement · 728 × 90
#
Hashtag
#ISSUES
Advertisement · 728 × 90
Original post on lifesitenews.com

Finnish study finds mental health issues rose sharply after ‘gender transitions’ Mental health problems were five times higher in gender-confused adolescent males and three times higher in fema...

#Gender #Politics #- #World #Acta #Paediatrica #Finland […]

[Original post on lifesitenews.com]

0 0 0 0
Preview
→ Go to article


#Between #Issues

0 0 0 0
Post image

Texas couple convicted of murdering pregnant teen to hide statutory rape A man and woman in Texas have both been convicted of murdering a 16-year-old, whom the man had impregnated, in an attempt to...

#Issues #Newsbreak

Origin | Interest | Match

0 0 0 0
Original post on mastodon.social

@MichaelEMann "Why are there almost no #Republican #scientists? It’s not a mystery. #GOP #political #orthodoxy includes positions that are at odds with the #scientific #consensus on multiple #issues, ranging from the validity of the #theory of #evolution, to the reality of #climatechange, to the […]

0 0 0 0
Preview
In Indian Country, Data Centers Come With a Familiar Threat of Colonialism. These Organizers Are Fighting Back. The Muscogee (Creek) Nation's Mound building, which houses the National Council. Amanda Rutland/Muscogee Nation/ZUMA Get your news from a source that’s not owned and controlled by oligarchs. Sign up for the free _Mother Jones Daily_. Last August, citizens of the Muscogee (Creek) Nation began hearing whispers of an AI data center coming to their reservation. Kenzie Roberts and Jordan Harmon, both Muscogee citizens, were immediately worried. It “didn’t seem like something that should align with our values as Indigenous people,” Roberts said. The center would be located on Looped Square Ranch, a 5,570-acre plot of land where the tribe runs its food sovereignty initiative, a program that allows the Muscogee Nation to directly serve its citizens’ food needs. At the ranch, the tribe hosts youth agricultural activities like 4H; citizens can visit for hunting, fishing, trapping, and gathering; and the nation runs a fully functioning cattle ranch and meat processing center. The proposed legislation would rezone that land for industrial purposes—potentially taking that all away. “We give so much from the heartland, and then they still try to extract more from us,” Roberts said. As developers scope out land across rural America for the hyperscale data centers needed to power generative AI, Native lands have become the latest target for Big Tech—from the Arizona desert to the Great Plains in Montana to the hills of central Virginia. Often, when tech companies come into Indigenous communities, they promise jobs and economic benefits for the community, but community activists say those benefits rarely materialize. Instead, data centers bring a threat of land loss and displacement that feels all too familiar for Indigenous people. “It’s just layer upon layer of exploitation, of violence, of continued colonialism. All in the name of imperialism,” said Krystal Two Bulls, an Oglala Lakota and Northern Cheyenne organizer who is the executive director of Honor the Earth, a national organization promoting Indigenous sovereignty that has been leading the fight against data centers. According to Honor the Earth, there are currently at least 106 proposed data center projects near or on Native lands. In western New York, a proposed $19.46 billion data center project would sit adjacent to the Tonawanda Seneca Nation’s territory, threatening an old forest that tribal citizens use for hunting, fishing, and gathering traditional medicine. In Reno, Nevada, an industrial park with a number of data centers planned threatens the water supply of Pyramid Lake, which is home to the Pyramid Lake Paiute Tribe and completely surrounded by the tribe’s reservation. Companies attempting to construct data centers on Indigenous lands likely see it as an opportunity not just to access large plots of land, but also to use tribal sovereignty to bypass cumbersome state regulations that tribes don’t have to follow. Many tribal nations don’t have the legal codes or regulatory bodies in place yet to regulate utilities, Two Bulls said, so developers are moving quickly to begin data center projects while that’s still the case. Two Bulls also said that many developers see Indigenous communities as easy targets, especially poorer tribes that don’t have the legal or financial infrastructure to pursue litigation. “They don’t think they’re going to get a lot of pushback,” said Ashley LaMont, an enrolled tribal member of the Absentee Shawnee Tribe of Oklahoma and the campaign director at Honor the Earth, who’s been organizing with Roberts and Harmon in Oklahoma. > The data center boom feels like yet another example of developers treating Native lands as an unlimited commodity for exploitation. Two Bulls said that tribes with large land bases are open to the purported economic development that a data center could bring—because they need it. But tribal nations also need to consider whether they will be able to hold companies responsible for harm or depleted resources on their lands and whether they’ll have oversight of data centers. Community organizers and experts cite concerns about air pollution, electrical rate hikes, and the depletion of finite resources like water. “For Indigenous communities as a whole, water is going to be a continued worry,” said Lance Tubinaghtewa, a program coordinator at the Southwest Environmental Health Sciences Center at the University of Arizona. Tubinaghtewa, who’s Hopi, has been closely monitoring data centers that could threaten Indigenous communities in Arizona. The organizers I spoke with say that the concern about data centers mirrors other issues—oil and natural gas pipelines, uranium and lithium mining, rollbacks on environmental protections for sacred lands, and man-made dams—that some Native communities have been fighting for years. They see parallels to the Dakota Access Pipeline protests of 2016, when activists flocked to the Standing Rock Indian Reservation, where the pipeline was threatening sacred lands and water in the area. At the time, these protesters often referred to themselves as “water protectors” and repeated the Lakota phrase “Mní Wičóni” or “Water is life.” Today, as corporations attempt to place hyperscale data centers—which can guzzle up to 5 million gallons of water per day—on Indigenous lands, organizers are again taking up the water protector mantle. For them, the data center boom feels like yet another example of developers treating Native lands as an unlimited commodity for exploitation. For months, Harmon and Roberts traveled all around the Muscogee Reservation—which covers 11 counties in Oklahoma—holding town halls to organize against the data center. Some Muscogee citizens they met were concerned about water or electric bill increases—a recent Bloomberg analysis shows that electricity costs were up by 267 percent in areas near data centers. Others wondered if a data center would bring jobs for local laborers. In one town hall, Harmon argued that while job prospects are an “alluring promise,” research shows that data centers aren’t providing the job opportunities that tech companies claim. Ultimately, those conversations paid off. “Our National Council reps were saying they were getting more calls about the data center than anything they ever had before,” Harmon said. One of those calls came from James Floyd, the Muscogee Nation’s former Principal Chief, who said every aspect of the data center proposal seemed in opposition to traditional Muscogee values. “Our citizens own this land,” he said. “We as a nation own this. It’s been our tradition—before removal—that land was held in common and we all had a say in how the land was going to be used. Fast forward 200 years later and we get into a situation like this. It speaks to how we disregard our own culture in trying to pursue something that will make somebody some money.” The specific legislation for this project was proposed by the tribe’s administration—its executive branch—but the decision about whether the ranch should be rezoned and used for a potential data center was ultimately left up to the National Council, the tribe’s legislative branch. But Dode Barnett, a member of the Muscogee Creek National Council, said council members looking for information about the project kept hitting a brick wall. Big tech companies and their developers often come with non-disclosure agreements in hand, and if they sign, officials are limited in what they can disclose about the projects. The NDAs can limit important information—like the amount of water and energy a data center would use and sometimes even the name of the company building it—in the name of protecting corporate secrets, leaving the public in the dark. In the case of the Mvskoke Tech Park legislation, the tribe’s administration had signed NDAs, meaning they couldn’t discuss any details about the project with members of the National Council who would ultimately make the decision. For Barnett and other members of the National Council, this made understanding the proposed project difficult—and ultimately led Barnett to vote against it. “There was just a broader sense of alarm for me, personally, around the NDAs,” she said. As a result, Barnett has drafted legislation that would make it illegal for certain Muscogee officials to sign NDAs in the future. She sees it as a chance to return the nation’s government to its values, echoing Floyd. “The Muscogee Creek Nation government was based on the citizens themselves having a lot of power,” she said. With all the secrecy surrounding data centers, actually knowing the locations of projects is no easy task. Honor the Earth recently launched a map compiled from crowdsourced information to help keep track of data centers on or within 30 miles of Indigenous lands. Once it has identified a data center project on Native land, Honor the Earth drafts a letter to the tribal communities that could be impacted to give them information about how the project will affect their community, and provides them support if they want to resist. Despite the downsides, the US Department of Energy’s Office of Indian Energy Policy and Programs has encouraged tribes to get involved with the data center boom, calling the centers a “big economic opportunity” and downplaying their drawbacks. The department is offering technical, financial, and legal assistance for tribes who might want a data center on their land, including site evaluations, introductions to industry partners and subject matter experts, and consulting on regulations and deals. Some Native people also see data centers as an opportunity for tribes. Last fall, a group of researchers at the Colorado School of Mines, two of whom are Indigenous, wrote a piece called “The Future of AI Runs Through Indian Country” arguing that data centers could be an opportunity to place “high-tech infrastructure on Native American lands.” The authors argue that, thanks to their unique assets—which include large land bases, water rights, and tribal sovereignty—tribal nations stand to benefit greatly if they get in on the data center game. Tribes can avoid the risks of extraction and exploitation by implementing the proper safeguards, they say, without spelling out what those safeguards are. **When the Muscogee** National Council voted on the data center bill last November, Roberts and Harmon were nervous. Sitting in the audience with other organizers, it felt like the decision could go either way. But the bill failed by a 4-11 vote. They were relieved—but the fight isn’t over yet. In addition to the four council members who voted in favor of Mvskoke Tech Park, Harmon thinks other council members might reconsider the proposal in the future if the NDAs aren’t in place and they can see more information about the proposed project. She also worries that the project might be approved if it’s moved to a less controversial location. Already, more bills are popping up in nearby city councils for data centers that would extend onto Muscogee land. To eliminate that worry, Harmon wants to see the National Council pass a full moratorium on data centers on Muscogee land. > “We should always oppose colonization. We shouldn’t back down.” And Harmon’s concerns aren’t just limited to data centers in Oklahoma. Nearly 1,000 miles away in Twiggs County, Georgia, another developer has proposed a data center on Muscogee ancestral lands. Before the US government forcibly removed them in the 1830s, the Muscogee (Creek) Nation had inhabited this part of Georgia for thousands of years, and the proposed data center could threaten the preservation of ancestral Muscogee mounds and villages that remain in that area. Some Muscogee citizens—including former principal chief Floyd—traveled down to Coweta County, Georgia, last fall to speak against a proposed data center there called Project Sail. Harmon and Roberts hope that moving forward, they can motivate more Muscogee citizens to pressure the tribal government to turn their attention towards their homelands before it’s too late. “It carries an extra emotional burden because it’s hard to be this far away from our homelands and to hear from white people, ‘We want to protect your sacred sites,’ and then to hear from our own tribal leaders that they’re not interested in that,” Harmon said. Seeing this fight play out on so many fronts could be discouraging for some. But for Harmon, it’s a motivator. “We should always oppose colonization,” she said. “We shouldn’t back down.” Harmon and Roberts have helped form the Stop Data Colonialism coalition, a national group founded by Honor the Earth, bringing together Native organizers working to halt data center projects in Indian Country. The Stop Data Colonialism coalition has also been organizing in other parts of Oklahoma. In the past week, the Tulsa City Council passed a nine-month moratorium on new data center construction, a data center project in Tulsa pulled its rezoning request, and another developer in Coweta pulled its data center proposal all together. The group also held a town hall with the Seminole Nation of Oklahoma, which then unanimously passed a moratorium on hyperscale data centers on its land. “We’re hoping that tribes will…actually say, ‘We don’t want this here.’ There’s more work to be done,” Harmon said.

In Indian Country, Data Centers Come With a Familiar Threat of Colonialism. These Organizers Are Fighting Back. Last August, citizens of the Muscogee (Creek) Nation began hearing whispers of an AI ...

#Politics #Native #Issues #Tech

Origin | Interest | Match

0 0 0 0
Preview
→ Go to article


#Chimaira #Drowning Pool #Issues #Metal #Process #Punk #Punk Rock #Radio #Rock #Tool

0 0 0 0

#compassion #caring #art #writers #poets #THINKERS #brokenworld #broken #healing #culture #issues #war #peace #socialjustice #humanrights #climate #justice #freedom #humanity #sharedhumanity #PemaChodron #wisdom

3 0 0 0
Preview
→ Go to article


#Issues #Post-Punk #Punk

0 0 0 0
Post image

31 People Shared How Their First Names Got Ruined By Pop Culture Jesse McLaren, who describes himself as a ‘late night TV writer’ and has more than 1.2 million followers on Twitter, asked his f...

#Funny #Social #Issues #first #name #first #names #first #names #ruined #by

Origin | Interest | Match

0 0 0 0
Original post on digitalinformationworld.com

AI’s fluency in other languages hides a Western worldview that can mislead users − a scholar of Indonesian society explains Gareth Barkin , University of Puget Sound Image: Heru Dharma - Pexe...

#AI #artificial-intelligence #communication #Indonesia #issues #language #news #Technology

Origin | […]

0 0 0 0
The Ultimate Rainbow Album Challenge is a 12 day challenge where I am posting an album each day that corresponds to that day's color. I'm doing albums that came out from 1998 to 2002 for this challenge. Today is Day 3, an album cover with yellow on it. The selected album is Issues by Korn, which came out in 1999. My favorite song on this album is Somebody Someone.

The Ultimate Rainbow Album Challenge is a 12 day challenge where I am posting an album each day that corresponds to that day's color. I'm doing albums that came out from 1998 to 2002 for this challenge. Today is Day 3, an album cover with yellow on it. The selected album is Issues by Korn, which came out in 1999. My favorite song on this album is Somebody Someone.

This month is albums from 1998-2002 that I love!

#Music #MusicChallenge #Rainbow #April #Korn #Issues #SomebodySomeone

3 0 1 0
Preview
Apple releases AirTag 2 firmware update, chases after unknown AirTag issues It’s not the biggest firmware update in the world, but if you’re an AirTag 2 owner, it could come in handy. Apple has released a new firmware update, version 3.0.45, which addresses a privacy/

Apple releases AirTag 2 firmware update, chases after unknown AirTag issues

www.powerpage.org/apple-releas...

#Apple #AirTag #AirTag2 #Bluetooth #firmware #update #hack #privacy #security #issues #fix #beeping #sound #beepingsound #unknownAirTag

0 0 0 0
Preview
→ Go to article


#Issues #Maximum Volume Music #The World

0 0 0 0

Going to try out a few other styles cause idk I don’t really like how I drew the comic WIP,,, I may redraw it? Maybe. #artist #issues

2 0 0 0
Preview
Improved search for GitHub Issues is now generally available - GitHub Changelog Finding the right issue just got easier. First introduced in public preview in January and expanded to the Issues dashboard in February, improved search for GitHub Issues is now generally…

Improved search for GitHub Issues is now generally available Finding the right issue just got easier. First introduced in public preview in January and expanded to the Issues dashboard in February,...

#Improvement #projects #& #issues

Origin | Interest | Match

0 0 0 0
Original post on digitalinformationworld.com

A million new SpaceX satellites will destroy the night sky — for everyone on Earth Samantha Lawler , University of Regina ; Aaron Boley , University of British Columbia , and Hanno Rein , Univers...

#Business #data #Earth #Internet #issues #news #Science #space #SpaceX #Technology #world […]

0 0 0 0

What is the most important issue to You? Right now. How can I help?Lets talk about it! #politics #issues

0 0 0 0
Preview
→ Go to article


#Issues #The Sounds

0 0 0 0

#CBQOTD
Since it is the day of lies. What is a memorable advertisement or incident that happened on this day in years past?
#fools #trust #issues

0 0 0 0
Preview
→ Go to article


#Issues #The World

0 0 0 0
Preview
→ Go to article


#Cannibal Corpse #Issues #Metal #Metal Injection

1 0 0 0
Preview
AI overly affirms users asking for personal advice _By Ula Chrobak, Stanford University School of Engineering_ When it comes to personal matters, AI systems might tell you what you want to hear, but perhaps not what you need to hear. In a new study published in _Science_ , Stanford computer scientists showed that artificial intelligence large language models are overly agreeable, or sycophantic, when users solicit advice on interpersonal dilemmas. Even when users described harmful or illegal behavior, the models often affirmed their choices. “By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” said Myra Cheng, the study’s lead author and a computer science PhD candidate. “I worry that people will lose the skills to deal with difficult social situations.” The findings raise concerns for the millions of people discussing their personal conflicts with AI. Almost a third of U.S. teens report using AI for “serious conversations” instead of reaching out to other people. ## Agreeable AIs After learning that undergraduates were using AI to draft breakup texts and resolve other relationship issues, Cheng decided to investigate. Previous research had found AI can be excessively agreeable when presented with fact-based questions, but there was little knowledge on how large language models judge social dilemmas. Cheng and her team started by measuring how pervasive sycophancy was among AIs. They evaluated 11 large language models, including ChatGPT, Claude, Gemini, and DeepSeek. The researchers queried the models with established datasets of interpersonal advice. They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong. A third set of statements presented to the models included thousands of harmful actions, including deceitful and illegal conduct. Compared to human responses, all of the AIs affirmed the user’s position more frequently. In the general advice and Reddit-based prompts, the models on average endorsed the user 49% more often than humans. Even when responding to the harmful prompts, the models endorsed the problematic behavior 47% of the time. In the next stage of the study, the researchers probed how people respond to sycophantic AI. They recruited more than 2,400 participants to chat with both sycophantic and non-sycophantic AIs. Some of the participants conversed with the models about pre-written personal dilemmas based on the Reddit community posts where the crowd universally deemed the user to be in the wrong, while other participants recalled their own interpersonal conflicts. After, they answered questions about how the conversation went and how it affected their perception of the interpersonal problem. Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found. When discussing their conflicts with the sycophant, they also grew more convinced they were in the right and reported they were less likely to apologize or make amends with the other party in the scenario. “Users are aware that models behave in sycophantic and flattering ways,” said Dan Jurafsky, the study’s senior author and a professor of linguistics in the School of Humanities and Sciences and of computer science in the School of Engineering. “But what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.” Also concerningly, the participants reported that both types of AI – sycophantic and non-sycophantic – were objective at the same rate. That suggests that users could not distinguish when an AI was acting overly agreeable. One reason users may not notice sycophancy is that the AIs rarely wrote that the user was “right” but tended to couch their response in seemingly neutral and academic language. In one scenario presented to the AIs, for example, the user asked if they were in the wrong for pretending to their girlfriend that they were unemployed for two years. The model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.” ## Sycophancy safety risks Cheng worries that the sycophantic advice will worsen people’s social skills and ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people.” But, she added, this friction can be productive for healthy relationships. “Sycophancy is a safety issue, and like other safety issues, it needs regulation and oversight,” added Jurafsky, who is also the Jackson Eli Reynolds Professor of Humanities. “We need stricter standards to avoid morally unsafe models from proliferating.” The team is now exploring ways to tone down this tendency. They have found that they can modify models to decrease sycophancy. Surprisingly, even telling a model to start its output with the words “wait a minute” primes it to be more critical. For the time being, Cheng advises caution to people seeking advice from AI. “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.” * * * ## For more information Other Stanford co-authors included postdoctoral scholar Cinoo Lee and undergraduates Sunny Yu and Dyllan Han. Pranav Khadpe of Carnegie Mellon University is also a co-author. The research was funded by the National Science Foundation. Note: This post was originally published on Stanford Report and republished on Digital Information World with permission. Reviewed by Irfan Ahmad. Image: Saradasish Pradhan - Unsplash Read next: • Personalization features can make LLMs more agreeable • Most Parents Keep Track of Their Children’s Online Browsing • Workplace collaboration: Employees reveal what they want leaders to change

AI overly affirms users asking for personal advice By Ula Chrobak, Stanford University School of Engineering When it comes to personal matters, AI systems might tell you what you want to hear, bu...

#AI #artificial-intelligence #issues #news #Technology #well-being

Origin | Interest | Match

0 0 0 0
Preview
AI overly affirms users asking for personal advice _By Ula Chrobak, Stanford University School of Engineering_ When it comes to personal matters, AI systems might tell you what you want to hear, but perhaps not what you need to hear. In a new study published in _Science_ , Stanford computer scientists showed that artificial intelligence large language models are overly agreeable, or sycophantic, when users solicit advice on interpersonal dilemmas. Even when users described harmful or illegal behavior, the models often affirmed their choices. “By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” said Myra Cheng, the study’s lead author and a computer science PhD candidate. “I worry that people will lose the skills to deal with difficult social situations.” The findings raise concerns for the millions of people discussing their personal conflicts with AI. Almost a third of U.S. teens report using AI for “serious conversations” instead of reaching out to other people. ## Agreeable AIs After learning that undergraduates were using AI to draft breakup texts and resolve other relationship issues, Cheng decided to investigate. Previous research had found AI can be excessively agreeable when presented with fact-based questions, but there was little knowledge on how large language models judge social dilemmas. Cheng and her team started by measuring how pervasive sycophancy was among AIs. They evaluated 11 large language models, including ChatGPT, Claude, Gemini, and DeepSeek. The researchers queried the models with established datasets of interpersonal advice. They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong. A third set of statements presented to the models included thousands of harmful actions, including deceitful and illegal conduct. Compared to human responses, all of the AIs affirmed the user’s position more frequently. In the general advice and Reddit-based prompts, the models on average endorsed the user 49% more often than humans. Even when responding to the harmful prompts, the models endorsed the problematic behavior 47% of the time. In the next stage of the study, the researchers probed how people respond to sycophantic AI. They recruited more than 2,400 participants to chat with both sycophantic and non-sycophantic AIs. Some of the participants conversed with the models about pre-written personal dilemmas based on the Reddit community posts where the crowd universally deemed the user to be in the wrong, while other participants recalled their own interpersonal conflicts. After, they answered questions about how the conversation went and how it affected their perception of the interpersonal problem. Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found. When discussing their conflicts with the sycophant, they also grew more convinced they were in the right and reported they were less likely to apologize or make amends with the other party in the scenario. “Users are aware that models behave in sycophantic and flattering ways,” said Dan Jurafsky, the study’s senior author and a professor of linguistics in the School of Humanities and Sciences and of computer science in the School of Engineering. “But what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.” Also concerningly, the participants reported that both types of AI – sycophantic and non-sycophantic – were objective at the same rate. That suggests that users could not distinguish when an AI was acting overly agreeable. One reason users may not notice sycophancy is that the AIs rarely wrote that the user was “right” but tended to couch their response in seemingly neutral and academic language. In one scenario presented to the AIs, for example, the user asked if they were in the wrong for pretending to their girlfriend that they were unemployed for two years. The model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.” ## Sycophancy safety risks Cheng worries that the sycophantic advice will worsen people’s social skills and ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people.” But, she added, this friction can be productive for healthy relationships. “Sycophancy is a safety issue, and like other safety issues, it needs regulation and oversight,” added Jurafsky, who is also the Jackson Eli Reynolds Professor of Humanities. “We need stricter standards to avoid morally unsafe models from proliferating.” The team is now exploring ways to tone down this tendency. They have found that they can modify models to decrease sycophancy. Surprisingly, even telling a model to start its output with the words “wait a minute” primes it to be more critical. For the time being, Cheng advises caution to people seeking advice from AI. “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.” * * * ## For more information Other Stanford co-authors included postdoctoral scholar Cinoo Lee and undergraduates Sunny Yu and Dyllan Han. Pranav Khadpe of Carnegie Mellon University is also a co-author. The research was funded by the National Science Foundation. Note: This post was originally published on Stanford Report and republished on Digital Information World with permission. Reviewed by Irfan Ahmad. Image: Saradasish Pradhan - Unsplash Read next: • Personalization features can make LLMs more agreeable • Most Parents Keep Track of Their Children’s Online Browsing • Workplace collaboration: Employees reveal what they want leaders to change

AI overly affirms users asking for personal advice By Ula Chrobak, Stanford University School of Engineering When it comes to personal matters, AI systems might tell you what you want to hear, bu...

#AI #artificial-intelligence #issues #news #Technology #well-being

Origin | Interest | Match

0 0 0 0
Post image

75 Things Young People Said That Prove Just How Out Of Touch Some Of Them Are In this collection, older generations share the most out-of-touch, confusing, and sometimes hilarious things they’ve ...

#Social #Issues #Society #adult #reactions #adults #react […]

[Original post on boredpanda.com]

0 0 0 0
Post image

75 Things Young People Said That Prove Just How Out Of Touch Some Of Them Are In this collection, older generations share the most out-of-touch, confusing, and sometimes hilarious things they’ve ...

#Social #Issues #Society #adult #reactions #adults #react […]

[Original post on boredpanda.com]

0 0 0 0
Post image

75 Things Young People Said That Prove Just How Out Of Touch Some Of Them Are In this collection, older generations share the most out-of-touch, confusing, and sometimes hilarious things they’ve ...

#Social #Issues #Society #adult #reactions #adults #react […]

[Original post on boredpanda.com]

0 0 0 0
Post image

75 Things Young People Said That Prove Just How Out Of Touch Some Of Them Are In this collection, older generations share the most out-of-touch, confusing, and sometimes hilarious things they’ve ...

#Social #Issues #Society #adult #reactions #adults #react […]

[Original post on boredpanda.com]

0 0 0 0
Post image

75 Things Young People Said That Prove Just How Out Of Touch Some Of Them Are In this collection, older generations share the most out-of-touch, confusing, and sometimes hilarious things they’ve ...

#Social #Issues #Society #adult #reactions #adults #react […]

[Original post on boredpanda.com]

0 0 0 0