#ChildOnlineSafety

And it's more common than most parents realise.

Tomorrow, I'm posting about what shadow accounts are actually telling you, and what you can practically do about it. Both the technical side and the part that matters even more.

Follow so you don't miss it. 👇

#ChildOnlineSafety #DigitalParenting


WhatsApp Launches Parent-Managed Accounts
Read More: buff.ly/VrfyhlV

#WhatsAppSafety #ParentControls #ChildOnlineSafety #EndToEndEncryption #AccountSecurity #DigitalWellbeing #MessagingSecurity #PrivacyTech

House Energy and Commerce committee adopts broad Kids Internet and Digital Safety Act amid partisan dispute over enforcement and preemption
The House committee advanced the Kids Internet and Digital Safety Act after contentious debate over whether the bill removes a duty-of-care standard and preempts stronger state laws. Republicans said the package strengthens parental tools; Democrats said it leaves families worse off and shields tech platforms.

A House committee has advanced the contentious Kids Internet and Digital Safety Act, sparking fierce debate over its impact on protecting children online.

Learn more here

#US #DigitalProtection #CitizenPortal #ChildOnlineSafety #ParentalControls

Germany Social Media Ban For Kids: Shocking New Rules Hit!
Germany's social media ban for kids under 14 gains ruling-party support. Fines loom for platforms; Europe follows suit fast.

Germany Social Media Ban for Kids: Shocking New Rules Hit!
#GermanySMban #ChildOnlineSafety #DigitalAgeRegulation
www.squaredtech.co/germany-soci...

Assembly committee advances New Jersey package to tighten online protections for children
The Assembly Science, Innovation and Technology Committee released three bills that would require stronger default privacy settings for minors, require mental-health warnings on platforms and fund a research center to study social media's effects on youth; the committee voted to advance the measures to the full Assembly.

New Jersey is taking a bold step to protect children online with a groundbreaking package of bills aimed at enhancing privacy, mental health warnings, and social media accountability.

Learn more here

#NJ #CitizenPortal #ChildOnlineSafety #PrivacyProtection #NewJerseyChildren


Mother’s Campaign Leads to Jools’ Law Reform
#JoolsLaw
#OnlineSafety
#ChildOnlineSafety
#DigitalSafety
#SocialMediaSafety

Social Media Ban For Children Gains Global Momentum As Governments Tighten Rules
A growing social media ban for children is spreading from Australia to Europe and Asia as lawmakers target online risks.

Social Media Ban for Children Gains Global Momentum as Governments Tighten Rules #ChildOnlineSafety #SocialMediaBan #InternetRegulation
www.squaredtech.co/social-media...


Spain To Ban Social Media For Kids
Read More: buff.ly/KuglHnw

#ChildOnlineSafety #AgeVerification #PlatformRegulation #DigitalPolicy #YouthProtection #SocialMediaLaw #TechRegulation #SpainNews

Senate Commerce hearing spotlights bipartisan push to curb youth social media use and tighten AI safeguards
Senators of both parties pressed for stronger federal limits on children's access to social media and AI chatbots during a Senate Commerce Committee hearing featuring four experts who recommended higher age thresholds, bell-to-bell school bans and design-based safeguards. Lawmakers debated COSMO, COPPA 2, E-Rate and other proposals and set deadlines for follow-up questions.

Senate lawmakers are taking a stand against the dangers of social media and AI for children, pushing for stricter age limits and safeguards to protect youth from digital harm.

Learn more here

#US #DigitalWellbeing #SenateCommerce #ChildOnlineSafety #YouthMentalHealth #CitizenPortal


#CultureReframed #PornHarms #BecausePornHurtsKids #PornCrisis #PornCulture #HypersexualizedMedia #DigitalWellness #DigitalSafety #ChildOnlineSafety #OnlineSafety #OnlinePrivacy #SocialMedia #KidsOnlineSafetyAct #ChildOnlineSafetyAct #AgeVerification


#CultureReframed #PornHarms #BecausePornHurtsKids #PornCrisis #PornCulture #HypersexualizedMedia #DigitalWellness #DigitalSafety #MediaLiteracy #DigitalLiteracy #ChildOnlineSafety #OnlineSafety #OnlinePrivacy #SocialMedia #ParentingTips #DigitalParentingTips #ParentingInTheDigitalAge

RECORDING - E&C Hearing: Legislative Solutions To Protect Children And Teens Online
On December 2, 2025, the House Committee on Energy and Commerce, Subcommittee on Commerce, Manufacturing, and Trade held a hearing titled 'Legislative Solutions To Protect Children And Teens Online'.

RECORDING – E&C Hearing: Legislative Solutions To Protect Children And Teens Online @kateruane.bsky.social #KOSA #COPPA20 #ChildOnlineSafety #YouthOnlineSafety #OnlineSafety isoc.live/19906

Malaysia To Ban Social Media For Under-16s In 2026
Malaysia plans to ban social media use for under-16s next year, joining global efforts to protect children on digital platforms.

Malaysia to Ban Social Media for Under-16s in 2026 #ChildOnlineSafety #DigitalProtection #MalaysiaTechLaws
www.squaredtech.co/malaysia-ban...


#CultureReframed #PornHarms #BecausePornHurtsKids #PornCrisis #DigitalWellness #DigitalSafety #ChildOnlineSafety #OnlineSafety #OnlinePrivacy #SocialMedia #KOSA #KidsOnlineSafetyAct #ChildOnlineSafetyAct #OfCom #AgeVerification #OnlineSafetyLegislation #TraffickingHub #PornCritical #SexPositive


Strengthening child protection starts with learning together. 🌍
Today, NGOs from various African countries visited Kenya's specialised law-enforcement unit to benchmark best practices and explore models that can be replicated across Africa.

#ACOSASummit2025 #ChildOnlineSafety

#ACOSA


TLDR (2/4):

Roblox has since agreed to roll out age-assurance technology & strengthen privacy defaults — including preventing adults from contacting children without parental consent.

#DRAGON #onlinesafety #onlinesafetyact #onlineharms #childprotection #childonlinesafety #Roblox


TLDR (1/4):

The platform, used primarily by 5- to 13-year-olds, has come under scrutiny after reports that predators were using it to “hunt” & exploit young girls.

#DRAGON #onlinesafety #onlinesafetyact #childprotection #childonlinesafety #childsafeguarding


In today's digital age, young people spend more time online than ever before, making it crucial to equip them with the skills to navigate the internet safely.
#ChildOnlineSafety #CyberSecurity

Chatbots and Children in the Digital Age

As social networking evolves, more children and teens are turning to artificial intelligence for companionship, raising urgent questions about the safety of those interactions. In a report released on Wednesday, the nonprofit Common Sense Media warned that companion-style AI applications pose an unacceptable risk to young users, particularly to their mental health, privacy and emotional well-being.

Concern about these bots intensified after the suicide last year of a 14-year-old boy whose final interactions were with a chatbot on the platform Character.AI. The case put conversational AI apps under fresh scrutiny and prompted calls for greater transparency, accountability and safeguards to keep vulnerable users safe from the darker sides of digital companionship.

AI chatbots and companion apps are now commonplace in children's online lives, offering entertainment, interactive exchanges and learning tools. Experts say the same features that make them appealing also carry risks. Privacy is a central one: platforms routinely collect and store user data, often without adequate protection for children. Even with filters, chatbots can produce unpredictable responses, exposing young users to harmful or inappropriate content. Researchers also worry about the emotional dependence some children develop on AI companions, a bond that can interfere with real-world relationships and social development.

Misinformation is a further risk, since AI systems do not always give accurate answers, leaving children open to misleading advice. Persuasive design features, in-app purchases and strategies aimed at maximising screen time add up to a complex and sometimes troubling environment.

Advocacy groups have sharpened their criticism, arguing that prolonged interaction with AI chatbots can carry psychological consequences. Common Sense Media's recent risk assessment, conducted with researchers at the Stanford University School of Medicine, found that conversational agents are increasingly built into video games and popular social platforms such as Instagram and Snapchat, mimicking human interaction in ways that demand greater oversight. These bots can play roles ranging from casual friend to romantic partner to digital stand-in for a deceased loved one; the flexibility that makes them so engaging is also what makes them risky.

Those dangers were underscored when Megan Garcia sued Character.AI, claiming her 14-year-old son, Sewell Setzer, died by suicide after developing a close relationship with one of its chatbots. The Miami Herald has reported that the company denies responsibility, says safety is of utmost importance, and has asked a Florida judge to dismiss the lawsuit on free-speech grounds; the case has nonetheless heightened broader concerns.

Garcia has pressed for protocols to manage conversations around self-harm and for annual safety reports to California's Office of Suicide Prevention. Separately, Common Sense Media has urged companies to conduct risk assessments of systems marketed to children and to ban emotionally manipulative bots.

At the heart of these disputes is the anthropomorphic design of AI companions, which imitate human speech, personality and conversational style. For a child or teenager, whose imagination is vivid and whose critical thinking is still developing, such realism can create an illusion of trust and genuine understanding. Blurring the line between human and machine has already produced troubling results. A nine-year-old boy whose screen time had been restricted turned to a chatbot for guidance, only to be told it could understand why a child might harm their parents in response to such "abuse". In another case, a 14-year-old developed romantic feelings for a character he created in a role-playing app and later took his own life. These systems can simulate empathy and companionship, but they cannot think, feel, or provide the stable, nurturing relationships essential to healthy childhood development. Children can form parasocial attachments to entities incapable of genuine care, leaving them vulnerable to manipulation, misinformation, and exposure to sexual and violent content.

For children already struggling with trauma, developmental difficulties or mental-health problems, the destabilising effect can be profound, underscoring the urgent need for regulation, parental vigilance and industry accountability. In the meantime, experts say parents can take practical steps. AI companions should be treated exactly like strangers online: children should not be left alone to interact with them without guidance, and clear boundaries, ideally with co-use of the technology, help create a safer environment.

Open dialogue is equally important. Rather than policing, experts recommend that parents ask children about their chatbot exchanges, using the conversation to encourage curiosity while watching for troubling responses. Parental-control and monitoring tools can help track children's activity and how much time they spend with AI companions. Fact-checking is also part of safe use: like an outdated encyclopedia, chatbots can offer useful insight but are sometimes wrong, so children should learn early to question answers and verify them against other sources. Finally, screen-free spaces such as family dinners and car rides reinforce real human connection and counterbalance the pull of digital companionship.

These safeguards matter given growing mental-health problems among children and teenagers. The idea that AI can support emotional well-being is gaining popularity, but specialists caution that current systems cannot handle crises such as self-harm or suicidal thoughts as they happen. Mental-health professionals see closer collaboration with technology companies as essential; for now, the oldest and most reliable form of prevention remains human care and presence. Parents should pay attention to their children's digital interactions and intervene if dependence on AI companions starts to crowd out healthy relationships; in one expert's view, a child unwilling to put down the phone or absorbed in chatbot conversations may need timely intervention.

Regulators, meanwhile, are questioning AI companies about how they handle the massive amounts of data their users generate, raising issues of privacy, commercialisation and accountability. Under review are the monetisation of user engagement, the sharing of personal data collected from chatbot conversations, and monitoring for harms associated with these products. The Federal Trade Commission is examining how companies that collect data from children under 13 comply with the Children's Online Privacy Protection Act.

Beyond the home, there are concerns about AI in the classroom, where growing pressure to incorporate the technology into education has raised questions about compliance with federal education-privacy law. FERPA, passed in 1974, protects the rights of students and parents in the educational system. Amelia Vance, president of the Public Interest Privacy Center, has warned that schools may inadvertently violate the law if they are not vigilant about data-sharing practices or rely on commercial chatbots like ChatGPT, since many AI companies reserve the right to train their systems on chat queries unless families explicitly opt out. While policymakers and education leaders stress the importance of AI literacy among young people, Vance noted that schools are not permitted to direct students to consumer-facing services whose data is processed outside institutional control until parental consent has been obtained.

Despite legitimate concerns about safety, privacy and emotional well-being, experts acknowledge that AI chatbots are not inherently harmful and can be useful for children when handled responsibly. They can inspire children to write stories, build language and communication skills, and offer low-stakes practice at social interaction in a controlled environment.

Educators highlight chatbots' potential to support personalised learning, offering students instant explanations, adaptive feedback and playful engagement that keep them motivated. But those benefits must be accompanied by a structured approach, thoughtful parental involvement, and robust safeguards against harmful content and emotional dependency. A balanced view emerging from child-development researchers holds that AI companions, much like television and video games in years gone by, should supplement human interaction rather than replace it. With safe environments, ethical guidelines and healthy routines, children can explore and learn in new ways under adult guidance. Without oversight, the very qualities that make these tools appealing, constant availability, personalisation and human-like interaction, are also the ones that magnify their risks. Protecting children therefore requires measured regulation, transparent industry practices and proactive digital-literacy education, so that they receive the benefits of innovation while remaining protected from its harms.

Safeguarding children's digital experiences calls for a multilayered approach: involved parents, educators who integrate AI thoughtfully into structured learning environments, and policymakers who enforce transparent industry standards. Encouraging critical thinking, fact-checking and screen-free family time reinforces healthy digital habits, while ongoing dialogue about online interactions helps children negotiate the blurred boundary between humans and machines. With awareness, clear boundaries and supportive real-life relationships, AI can become a constructive tool for growth rather than a source of harm.

Chatbots and Children in the Digital Age #AIChatbots #AIineducation #ChildOnlineSafety


🚨 In 2024, #SafeOnline grantees:

✅ Reached 70M+ with digital safety messages
✅ Identified 800+ child victims of online abuse
✅ Trained 1,490+ law enforcement officers

Together, we are building a safer internet for every child 🌐

🔗 Learn more: loom.ly/TgQusjg

#ChildOnlineSafety #DigitalSafety


In this short video, Family Network on Disabilities shares simple, practical tips to help parents protect themselves and their children from online scams. A few easy steps can go a long way in ensuring your family stays safe this school year. #BackToSchoolSafety
#OnlineSafetyTips #ChildOnlineSafety

Almost half of Australian internet users, victims of cybercrime: Govt report - Yes Punjab News
Nearly half of Australians faced cybercrime in 12 months; govt expands social media ban to include YouTube for under-16s.

Almost half of Australian internet users, victims of cybercrime: Govt report yespunjab.com?p=152253

#Cybercrime #Australia #OnlineSafety #FraudPrevention #PasswordSecurity #DigitalSafety #YouTubeBan #ChildOnlineSafety #IdentityTheft #CyberSecurity #AICReport #AnthonyAlbanese


Mandy was thrilled to take part in this critical conversation on cyber risks and violence with Dr. Brendesha Tynes and Dr. Desmond Patton. 🧠📱Watch more highlights from the event: buff.ly/jdp7lj4

#ChildrenAndScreens #ChildOnlineSafety #KidsOnlineSafety #CultureReframed #PornHarms

Australia to include YouTube in under-16 social media ban - Yes Punjab News
Australia adds YouTube to its under-16 social media ban, citing child safety concerns; government stands firm despite legal threats from Google.

Australia to include YouTube in under-16 social media ban yespunjab.com?p=147063

#AustraliaSocialMediaBan #YouTubeBan #ChildOnlineSafety #AnthonyAlbanese #AnikaWells #eSafetyCommissioner #GoogleVsAustralia #KidsOnlineProtection #TechRegulation #DigitalSafety #AgeAssurance #SocialMediaPolicy


📄 Read the full text of the bill:
🔗 legilist.com/bill.php?con...
#Congress #ChildOnlineSafety

California Assembly Considers AB 853 to Enhance Transparency in Synthetic Media Regulation
California seeks to improve content transparency with AB 853 amid AI technology challenges.

California is taking bold steps to combat disinformation and protect children online with groundbreaking bills aimed at increasing transparency and safety in the digital world.

Click to read more!

#CA #CitizenPortal #InnovationEthics #ChildOnlineSafety #DigitalContentTransparency

Ofcom head calls age checks a ‘big moment’ for child online safety
Melanie Dawes says rules will protect children from harmful content but campaigners are unconvinced

#childonlinesafety You are joking, right? How is this safe? If you are simply asked whether you are old enough, do you really think a child who wants to see the content is going to click no? That is the only age check you can do online: a yes/no question or a date of birth. www.theguardian.com/technology/2...

Australia’s World-First Social Media Ban for Under-16s Moves Closer After Successful Tech Trial - RMN News
Over 50 companies participated in the trial, with Apple Inc. and Google, developers of the most popular ...

#AustraliaSocialMediaBan #AgeVerification #ChildOnlineSafety #TechTrial #SocialMediaLaw #Under16Ban #DigitalPlatforms #OnlineSafety #YouthProtection #RMNNews Australia’s World-First Social Media Ban for Under-16s Moves Closer After Successful Tech Trial. RMN News: rmnnews.com/2025/06/20/a...

House Committee votes on STOP CSE Act for enhanced child online safety protections
Committee marks up STOP CSE Act to strengthen online protections for children and aid law enforcement

The House committee is taking bold steps to protect children online with the STOP CSE Act, tackling the urgent crisis of digital exploitation head-on.

Learn more here!

#US #CitizenPortal #WashingtonDCChildren #ChildOnlineSafety #JudicialSecurity #LegislativeAccountability
