Hashtag: #ChatGPTOpenAI
China Warns Government Staff Against Using OpenClaw AI Over Data Security Concerns

Chinese government offices and public sector firms have recently begun advising staff not to install OpenClaw on official devices, according to sources familiar with internal discussions. Security concerns are the main driver behind the warnings, and unease about data safety has been rising as powerful artificial intelligence spreads through workplaces.

Although built on open-source code, OpenClaw operates with unusual autonomy, handling complex tasks with little human guidance. Because it acts directly on the machines it runs on, interest surged quickly, not only among developers but also among large companies and municipal planners. Its presence is spreading quietly but steadily across Chinese industrial zones and technology hubs. Even so, top oversight bodies and state media outlets continue to point to potential risks tied to the software.

Officials warn that AI programs granted deep access to operating systems could expose confidential details, delete essential documents, or mishandle personal records. Those risks weigh more heavily in agencies and large firms that manage vast amounts of sensitive information. One report notes that workers at public sector firms received explicit instructions to avoid OpenClaw, in some cases extending to personal devices. Although there is no official ban, insiders at one federal body say personnel were firmly warned against downloading the software over data risks.

How widely such restrictions apply, across regions or agencies, remains uncertain. The cautious approach shows how Beijing is balancing competing priorities: even as officials push plans to embed artificial intelligence across sectors and spur development through widespread adoption, they are working to contain threats to digital security and information control.

Growing global tensions add pressure, sharpening questions about who controls data and under what conditions. Uncertainty shapes decisions more than any single policy goal.

Even with these cautions in place, some regional projects continue to use OpenClaw. Health programs under Shenzhen's city government, for example, are said to have run extensive training exercises with the model as part of wider upgrades to digital infrastructure. Elsewhere in the same city, one administrative district used OpenClaw to build a specialized assistant for public sector workflows.

Although national leaders urge restraint, some regional bodies may test limited applications tied to development targets. Whether broader restrictions emerge, or monitoring simply increases, remains unclear; what happens next depends on shifting priorities at different levels of government.

OpenClaw was originally created by Peter Steinberger, who recently joined OpenAI, as an open-source project hosted on GitHub. Attention around the tool has grown since his new role became known.

As AI systems gain greater autonomy and embed themselves in daily operations, questions about safety will only grow sharper, especially where confidential or regulated information is involved.

China Warns Government Staff Against Using OpenClaw AI Over Data Security Concerns #AIRisks #AISystems #ChatGPTOpenAI

U.S. Blacklists Anthropic as Supply Chain Risk as OpenAI Secures Pentagon AI Deal

The Trump administration has designated AI startup Anthropic as a supply chain risk to national security, ordering federal agencies to immediately stop using its AI model Claude. The classification has historically been applied to foreign companies and marks a rare move against a U.S. technology firm.

President Donald Trump announced that agencies must cease use of Anthropic's technology, allowing a six-month phase-out for departments heavily reliant on its systems, including the Department of War. Defense Secretary Pete Hegseth later formalized the designation and said no contractor, supplier, or partner doing business with the U.S. military may conduct commercial activity with Anthropic.

At the center of the dispute is Anthropic's refusal to grant the Pentagon unrestricted access to Claude for what officials described as lawful purposes. Chief executive Dario Amodei sought two exceptions, covering mass domestic surveillance and the development of fully autonomous weapons. He argued that current AI systems are not reliable enough for autonomous weapons deployment and warned that mass surveillance could violate Americans' civil rights. Anthropic has said a proposed compromise contract contained loopholes that could allow those safeguards to be bypassed.

The company had been operating under a $200 million Department of War contract since June 2024 and was the first AI firm to deploy models on classified government networks. After negotiations broke down, the Pentagon issued an ultimatum that Anthropic declined, leading to the blacklist. The company plans to challenge the designation in court, arguing it may exceed the authority granted under federal law.

While the restriction applies directly to Defense Department-related work, legal analysts say the move could create broader uncertainty across the technology sector. Anthropic relies on cloud infrastructure from Amazon, Microsoft, and Google, all of which maintain major defense contracts; a strict interpretation of the order could complicate those relationships. President Trump has warned of serious civil and criminal consequences if Anthropic does not cooperate during the transition.

Even as Anthropic faces federal restrictions, OpenAI has moved ahead with its own classified agreement with the Pentagon. The company said Saturday that it had finalized a deal to deploy advanced AI systems within classified environments under a framework it describes as more restrictive than previous contracts. In its official blog post, OpenAI said, "Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies." It added, "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's."

OpenAI outlined three red lines that prohibit the use of its technology for mass domestic surveillance, for directing autonomous weapons systems, and for high-stakes automated decision making. The company said deployment will be cloud-only and that it will retain control over its safety systems, with cleared engineers and researchers involved in oversight. "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the company wrote.

The contract references existing U.S. laws governing surveillance and military use of AI, including requirements for human oversight in certain weapons systems and restrictions on monitoring Americans' private information. OpenAI said it would not provide models without safety guardrails and could terminate the agreement if terms are violated, though it added that it does not expect that to happen.

Despite its dispute with Washington, Anthropic appears to be gaining traction among consumers. Claude recently climbed to the top position in Apple's U.S. App Store free rankings, overtaking OpenAI's ChatGPT. Data from SensorTower shows the app was outside the top 100 at the end of January but rose steadily through February. A company spokesperson said daily signups reached record levels this week, free users have increased more than 60 percent since January, and paid subscriptions have more than doubled this year.

U.S. Blacklists Anthropic as Supply Chain Risk as OpenAI Secures Pentagon AI Deal #Anthropic #ArtificialIntelligence #ChatGPTOpenAI

Hobby Channel: Gatebox's AI anime girl begins work as an "AI part-timer" serving customers at the TSUTAYA store in Ebisubashi, Osaka #Gadget #AIPartTimeJob #ChatGPTOpenAI #Digital

Gatebox's AI anime girl begins work as an "AI part-timer" serving customers at the TSUTAYA store in Ebisubashi, Osaka
#Gadget #AIPartTimeJob #ChatGPTOpenAI #Digital
