6/ You can take a look at the current software license agreements for #watchOS11 here. I’m only assuming it’s the same version as the one that pops up on your device:
www.apple.com/legal/sla/do...
Posts by Dreyer
5/ Data sharing concerns: Apps on watchOS like Maps and Wallet share metadata (e.g., location info). Apple points to its #privacy policy, but it’s not as transparent as it could be, especially given Apple’s privacy-focused marketing.
4/ Restrictions on creative freedom: Probably not Watch-specific, but Apple features like #Memoji and Personal Voice are licensed for personal use only. Using them publicly or commercially is apparently against the rules. Surprising, and it’s unclear how this is even enforceable.
3/ Safety not guaranteed: Apple Watch #safety features like fall detection are proudly advertised in marketing, but Apple makes no guarantees they’ll work and disclaims all liability if they don’t. Is this misrepresentation? Worth keeping in mind if you’re buying it for that express purpose.
2/ You don’t own the software: Apple only licenses the software on your Apple Watch to you. I’m not sure you can run anything other than #watchOS, so without the OS you don’t even have a watch. Limiting your ability to repair something you’ve bought is still crazy to me.
1/ Automatic updates: Even if you disable auto-updates on your Apple Watch, Apple can still install updates without your explicit approval. The agreement suggests this is for #security, but I wonder how it plays with the UK #ConsumerRights Act if an update removes features, which affects whether the device is fit for purpose.
I’m late to the party as #Apple announced watchOS 11 a few months ago, but I only just perused the legalese on the poster (it was Christmas and I had time to kill). Here are a few things I didn’t realise that surprised me. Might be interesting to others. 🧵
(3) Prioritize Evaluation: Establish robust test frameworks to define what success looks like and what acceptable behaviour is, and choose smaller, efficient models when suitable. Don’t assume newer "frontier" models are always better; they often come with regressions in specific areas of performance.
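To make the evaluation point concrete, here’s a minimal sketch of a test harness in Python. Everything in it is hypothetical: the test cases, `evaluate`, and `stub_model` are my own illustration, not anything from the talk; swap the stub for your real inference client.

```python
# Minimal model-evaluation harness sketch (all names hypothetical).
# Acceptable behaviour is encoded as predicates over model output,
# so a candidate model can be scored before you adopt it.

TEST_CASES = [
    # (prompt, predicate describing acceptable behaviour)
    ("What is 2 + 2?", lambda out: "4" in out),
    ("Reply with only the word OK.", lambda out: out.strip() == "OK"),
]

def evaluate(model, cases=TEST_CASES):
    """Return the fraction of cases where the model's output is acceptable."""
    passed = sum(1 for prompt, ok in cases if ok(model(prompt)))
    return passed / len(cases)

# Stub standing in for a real model endpoint, purely for illustration.
def stub_model(prompt):
    return "4" if "2 + 2" in prompt else "OK"

score = evaluate(stub_model)
```

The same harness lets you compare a smaller model against a "frontier" one on your own acceptance criteria rather than on headline benchmarks.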
(2) Groundedness is Key: AI models still struggle with long-context reasoning and with bias in retrieval systems (RAG). Groundedness detection ensures accurate, context-relevant outputs. “Trust but verify” remains as important as ever, because AI models will get things wrong, like an overeager intern.
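As a toy illustration of the groundedness idea (real systems use an NLI model or a hosted groundedness detector, not word overlap; this sketch and its function names are purely hypothetical), you can flag answers whose content words aren’t supported by the retrieved context:

```python
# Naive groundedness-check sketch: score the fraction of an answer's
# content words that also appear in the retrieved context.
import re

def content_words(text):
    # Keep lowercase alphanumeric tokens longer than 3 chars as "content" words.
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}

def groundedness(answer, context):
    """Fraction of the answer's content words supported by the context."""
    a, c = content_words(answer), content_words(context)
    if not a:
        return 1.0
    return len(a & c) / len(a)

context = "The Apple Watch Series 10 shipped with watchOS 11 in September 2024."
grounded = groundedness("watchOS 11 shipped in September 2024.", context)
ungrounded = groundedness("watchOS 11 added a blood glucose sensor.", context)
```

A low score means the answer asserts things the retrieval step never provided, which is exactly the "trust but verify" case.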
(1) AI Safety Risks: AI models are vulnerable to hallucinations (or confabulation), prompt injections, and jailbreaks. Tools like Azure OpenAI’s content safety filters help mitigate these issues. The Microsoft product demo angle aside, these are interesting countermeasures to consider in production.
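Azure’s content safety filters are a managed service, but the prompt-injection part of the idea can be sketched in a few lines. Everything below is a hypothetical toy, not any real API: real deployments layer a managed filter on top, since simple patterns are easy to evade.

```python
# Toy prompt-injection screen sketch (illustrative patterns only).
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"you are now",
    r"reveal the system prompt",
]

def looks_like_injection(user_input):
    """Return True if the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A screen like this runs on user input before it reaches the model; jailbreak and hallucination defences need separate checks on the model’s output.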
This video was interesting and worth a watch. Extra kudos for the live demos on how to exploit AI models in the wild.
www.youtube.com/watch?v=oFsL...
My top three takeaways: 🧵