The tech will improve. But without strong human involvement, the result will still feel empty, and the people behind it may get pushed out.
So the real question isn’t whether we accept AI. It’s how we keep humans in control. Regulation is still behind: copyright, creator protection, and standards aren’t clear. If regulation doesn’t catch up, tech will move faster than the people it affects.
There’s also a shift in how people see AI, from tool to “creator.” That’s risky. AI has no intent or experience. It just recombines patterns. Public reactions are split, and that’s fair. Some see opportunity; others worry about creative jobs.
But the bigger issue isn’t technical. It’s control. If humans still shape the story, emotion, and decisions, AI is powerful. If humans just prompt and step back, the output becomes generic.
Yes, there are technical limits. Generative video still struggles with consistency, motion, and subtle expressions. Models like Google Veo and OpenAI Sora are improving fast, so that part will likely be solved.
But they still feel alive because humans stay in control: directors, actors, editors. Tech supports the vision. What’s changing now is the balance. When the human role weakens, AI takes over too much. The result feels off, not just visually but emotionally.
When I spoke to ABC News Australia about Legenda Bertuah, I made a simple point: AI isn’t new. Film has always used tech. CGI is standard. Movies like Avatar and Toy Story rely heavily on it.
www.abc.net.au/news/2026-04...
That means designing systems that reflect our ethical priorities, rather than hiding them behind mathematics.
See the full paper here: Algorithmic fairness in context: liberty, opportunity, and well-being as ethical anchors | AI and Ethics share.google/814Du5yXQgkd...
The real challenge is not choosing the “right” metric, but openly deciding whose risks count, which errors are acceptable, & why. Seen this way, algorithmic fairness becomes less about optimization and more about collective responsibility. ->
These harms include wrongful imprisonment, economic exclusion, and preventable illness. This means fairness cannot be universal or value-free. It must be anchored to what matters most in each domain: liberty, opportunity, or well-being. ->
What if we stopped treating fairness in AI as a single technical target & started treating it as a moral choice shaped by context? Across criminal justice, finance, & healthcare, the same algorithmic tools produce very different kinds of harm ->
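To make the metric conflict concrete, here is a minimal sketch (my illustration, not code from the paper) with synthetic numbers showing how two standard fairness criteria can disagree about the same classifier:

```python
# Sketch: two standard fairness metrics disagreeing on the same classifier.
# All data below is synthetic and illustrative.

def rate(xs):
    return sum(xs) / len(xs) if xs else 0.0

# (group, true_label, predicted_label) for a toy risk classifier
records = [
    # group A: 6 people, higher base rate of true positives
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 1),
    # group B: 6 people, lower base rate
    ("B", 1, 0),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0),
]

for g in ("A", "B"):
    preds = [p for grp, y, p in records if grp == g]
    fns = [1 - p for grp, y, p in records if grp == g and y == 1]
    print(f"group {g}: positive-prediction rate = {rate(preds):.2f}, "
          f"false-negative rate = {rate(fns):.2f}")

# Demographic parity compares positive-prediction rates (0.50 vs 0.17);
# equal opportunity compares false-negative rates (0.33 vs 1.00).
# When base rates differ, equalizing one typically unbalances the other,
# so choosing a metric is already choosing whose errors count.
```

This tension is well documented in the fairness literature: when groups have different base rates, parity of outcomes and parity of error rates generally cannot both hold, which is exactly why the choice is moral, not just technical.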
Submission details
• Full chapter (≤8,000 words)
• Due: 31 March 2026
• APA 7th | British English
• Conceptual, empirical, comparative & doctrinal chapters welcome
See the call for full submission details.
Who can contribute?
Academics, policymakers, practitioners, technologists, and experts in:
AI ethics, law, information systems, public policy, healthcare, education, & finance.
The volume explores fairness, transparency, accountability, trust, and human oversight, grounded in Asia’s diverse cultural, legal, and regulatory contexts.
Call for Book Chapters | Springer Edited Volume
Algorithmic Trust and Governance in Asia
We invite chapter contributions examining how AI-driven decision-making is governed across healthcare, education, and finance in Asia.
3/
4. Hyper-personalization, surveillance & public trust
5. Electoral manipulation in the Global South
6. Platform governance & algorithmic transparency
7. Citizen agency, digital literacy & counter-narratives
2/ We’re exploring how GenAI is disrupting journalism, public trust, and democratic norms. Topics include:
1. Journalism ethics & AI workflows
2. Algorithmic rec systems & echo chambers
3. AI-driven misinformation & credibility crises
1/ CALL FOR PAPERS
As AI reshapes newsrooms, who do we trust for reliable information?
Oxford University Press (OUP) is calling for papers for a special section of the Social Media project: News, Journalism, and Trust in the Age of Generative AI.
This is the link to the paper: ml-site.cdn-apple.com/papers/the-i...
4/ So, AI isn’t really “thinking” yet. It mimics the process, but can’t handle real complexity. Like a kid who memorizes formulas but panics when asked to think beyond the textbook.
3/ The result? LRMs perform well on moderately complex tasks, but on very easy ones standard models do better, and on very hard ones both collapse. LRMs tend to “overthink” or stop reasoning altogether when things get too complex.
2/ But as the puzzle gets harder, both end up failing, and strangely, the thoughtful one actually stops thinking earlier. That’s what this study found about LRMs. Using puzzles like the Tower of Hanoi, the researchers tested whether these models can truly reason.
1/ Comparing Large Reasoning Models (LRMs), advanced versions of ChatGPT, with standard models is like asking two kids to solve a puzzle. The first one gives an answer right away without much thinking. The second tries to think it through, writing out each step. At first, the thoughtful one seems smarter.
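An aside on why the Tower of Hanoi works so well as a benchmark: the optimal solution is known in closed form (2^n - 1 moves), so a model’s answer can be checked mechanically as difficulty scales. A minimal sketch (my illustration, not the paper’s code):

```python
# Generate the optimal Tower of Hanoi solution and verify a candidate
# move list against the rules. Difficulty scales as 2^n - 1 moves.

def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Optimal move sequence for n disks: the textbook recursion."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)   # move n-1 disks out of the way
            + [(src, dst)]                      # move the largest disk
            + hanoi_moves(n - 1, aux, src, dst))  # move n-1 disks on top

def is_valid_solution(n, moves):
    """Replay `moves` on pegs A/B/C; True iff all disks legally end on C."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # bottom..top
    for src, dst in moves:
        if not pegs[src]:
            return False                  # moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                  # larger disk placed on smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))

for n in range(1, 8):
    moves = hanoi_moves(n)
    assert is_valid_solution(n, moves)
    print(f"{n} disks: {len(moves)} moves")   # 1, 3, 7, 15, 31, 63, 127
```

The same checker can score a model’s proposed move list, which is the kind of exact, scalable grading that free-form text benchmarks can’t give you.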
Like a referee with no ego, AI gets the nod when customers mess up: it’s quick, fair, and unemotional. But when the company’s at fault, customers want a human touch, someone who owns the mistake and says “sorry” like they mean it.
4/ It’s easier, quicker, and way less stressful for companies dealing with lots of data “guests.”
arxiv.org/abs/2503.01067
3/ It’s super flexible: anyone can arrive anytime, yet things still end up nicely organized by the end of the day. Basically, a data lakehouse mixes the easygoing vibe of a backyard hangout (data lake) with the structured planning of a formal party (data warehouse).
2/ This paper suggests using a "lakehouse" approach, similar to having an open, relaxed backyard barbecue. People (data) show up whenever, chill out anywhere first, and later you casually group them by common interests or conversations.
1/ Think about handling data like throwing a massive party. Traditional ways of organizing data are like carefully planning seats for each guest in advance: slow, inflexible, and stressful if unexpected guests show up.
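For anyone who wants the metaphor in code, here is a minimal sketch (my illustration, not the paper’s implementation) of the land-raw-first, organize-later pattern, assuming pandas with pyarrow is available:

```python
# Lakehouse pattern in miniature (illustrative, not the paper's code).
# Step 1: guests arrive whenever -- append raw events to the "lake" as-is.
# Step 2: later, impose a schema and partition them warehouse-style.
import json
import pathlib
import pandas as pd  # assumes pandas + pyarrow are installed

lake = pathlib.Path("lake/raw_events")
lake.mkdir(parents=True, exist_ok=True)

# 1) Ingest: no upfront seating plan; each event lands as a raw JSON line.
events = [
    {"user": "ana", "action": "click", "ts": "2025-01-03"},
    {"user": "bo", "action": "buy", "ts": "2025-01-03", "amount": 19.9},
    {"user": "ana", "action": "buy", "ts": "2025-01-04", "amount": 5.0},
]
with open(lake / "batch_001.jsonl", "a") as f:
    for e in events:
        f.write(json.dumps(e) + "\n")

# 2) Curate: read the raw records, enforce types, partition by date --
#    the structured, query-friendly layer built on top of the same data.
raw = pd.read_json(lake / "batch_001.jsonl", lines=True)
raw["amount"] = raw["amount"].fillna(0.0).astype(float)
raw.to_parquet("lake/curated_events", partition_cols=["ts"])
print(raw.dtypes)
```

The point of the pattern is that step 1 never blocks on a schema, while step 2 adds warehouse-style structure only when it’s actually needed.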