
Posts by Arif Perdana

The tech will improve. But without strong human involvement, the result will still feel empty, and the people behind it may get pushed out.

6 days ago

So the real question isn’t whether we accept AI. It’s how we keep humans in control. Regulation is still behind: copyright, creator protection, and standards aren’t clear. If it doesn’t catch up, the tech will move faster than the people it affects.

6 days ago

There’s also a shift in how people see AI, from tool to “creator.” That’s risky. AI has no intent or experience. It just recombines patterns. Public reactions are split, and that’s fair. Some see opportunity, others worry about creative jobs.

6 days ago

But the bigger issue isn’t technical. It’s control. If humans still shape the story, emotion, and decisions, AI is powerful. If humans just prompt and step back, the output becomes generic.

6 days ago

Yes, there are technical limits. Generative video still struggles with consistency, motion, and subtle expressions. Models like Google Veo and OpenAI Sora are improving fast, so that part will likely be solved.

6 days ago

But they still feel alive because humans stay in control: directors, actors, editors. Tech supports the vision. What’s changing now is the balance. When the human role weakens, AI takes over too much. The result feels off, not just visually, but emotionally.

6 days ago
AI generated the animation for this TV show. Is it 'cool' or 'messed up'? A television show that uses generative artificial intelligence to animate Indonesian folktales is creating a stir.

When I spoke to ABC News Australia about Legenda Bertuah, I made a simple point: AI isn’t new. Film has always used tech. CGI is standard. Movies like Avatar and Toy Story rely heavily on it.

www.abc.net.au/news/2026-04...

6 days ago

That means designing systems that reflect our ethical priorities, rather than hiding them behind mathematics.

See the full paper here: Algorithmic fairness in context: liberty, opportunity, and well-being as ethical anchors | AI and Ethics share.google/814Du5yXQgkd...

3 months ago

The real challenge is not choosing the “right” metric, but openly deciding whose risks count, which errors are acceptable, & why. Seen this way, algorithmic fairness becomes less about optimization and more about collective responsibility. ->
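The metric-conflict point can be made concrete with a minimal sketch in plain Python. The toy data below is invented for illustration: on the very same predictions, demographic parity (equal positive rates) says the two groups are treated alike, while equal opportunity (equal true-positive rates) says they are not.

```python
# Toy illustration: two fairness metrics disagree on the same predictions.
# All records are made up. Each record: (group, predicted_positive, actually_positive)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 1), ("B", 0, 1),
]

def positive_rate(group):
    """Share of the group predicted positive (demographic-parity view)."""
    rows = [r for r in records if r[0] == group]
    return sum(r[1] for r in rows) / len(rows)

def true_positive_rate(group):
    """Share of actual positives that were caught (equal-opportunity view)."""
    pos = [r for r in records if r[0] == group and r[2] == 1]
    return sum(r[1] for r in pos) / len(pos)

for g in ("A", "B"):
    print(g, positive_rate(g), true_positive_rate(g))
# Both groups get a 0.5 positive rate (parity holds), but group A's actual
# positives are all caught while group B's mostly are not.
```

Which of the two verdicts matters depends on the domain, which is exactly the "whose risks count" question.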

3 months ago

These harms include wrongful imprisonment, economic exclusion, or preventable illness. This means fairness cannot be universal or value-free. It must be anchored to what matters most in each domain: liberty, opportunity, or well-being. ->

3 months ago

What if we stopped treating fairness in AI as a single technical target & started treating it as a moral choice shaped by context? Across criminal justice, finance, & healthcare, the same algorithmic tools produce very different kinds of harm ->

3 months ago

Submission details
• Full chapter (≤8,000 words)
• Due: 31 March 2026
• APA 7th | British English
• Conceptual, empirical, comparative & doctrinal chapters welcome

See the full submission details.

3 months ago

Who can contribute?
Academics, policymakers, practitioners, technologists, and experts in:
AI ethics, law, information systems, public policy, healthcare, education, & finance.

3 months ago

The volume explores fairness, transparency, accountability, trust, and human oversight, grounded in Asia’s diverse cultural, legal, and regulatory contexts.

3 months ago

Call for Book Chapters | Springer Edited Volume

Algorithmic Trust and Governance in Asia

We invite chapter contributions examining how AI-driven decision-making is governed across healthcare, education, and finance in Asia.

3 months ago
Oxford Intersections: Social Media in Society and Culture Abstract. This work will provide an integrated analysis of social media’s transformative and disruptive power across the global sociocultural landscape. Of

See the details here.

academic.oup.com/edited-volum...

8 months ago

3/
4. Hyper-personalization, surveillance & public trust

5. Electoral manipulation in the Global South

6. Platform governance & algorithmic transparency

7. Citizen agency, digital literacy & counter-narratives

8 months ago

2/ We’re exploring how GenAI is disrupting journalism, public trust, and democratic norms. Topics include:

1. Journalism ethics & AI workflows

2. Algorithmic recommendation systems & echo chambers

3. AI-driven misinformation & credibility crises

8 months ago

1/ CALL FOR PAPERS

As AI reshapes newsrooms, who do we trust for reliable information?

Oxford University Press (OUP) is calling for papers for a special section of the Social Media project: News, Journalism, and Trust in the Age of Generative AI.

8 months ago

This is the link to the paper: ml-site.cdn-apple.com/papers/the-i...

10 months ago

4/ So, AI isn’t really “thinking” yet. It mimics the process, but can’t handle real complexity. Like a kid who memorizes formulas but panics when asked to think beyond the textbook.

10 months ago

3/ The result? LRMs perform well on moderately complex tasks, but for very easy or very hard ones, standard models do better. LRMs tend to “overthink” or stop thinking altogether when things get too complex.

10 months ago

2/ But as the puzzle gets harder, both end up failing, and strangely, the thoughtful one actually stops thinking earlier. That’s what this study found about LRMs. Using puzzles like the Tower of Hanoi, the researchers tested whether these models can truly reason.
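For a sense of why the Tower of Hanoi makes a good difficulty dial, here is a minimal classic solver in plain Python (not the paper’s code): solving n disks takes exactly 2**n − 1 moves, so each extra disk doubles the chain of steps a model has to get right without slipping.

```python
# Classic recursive Tower of Hanoi: move n disks from src to dst via aux.
# The move count grows as 2**n - 1, which is why the puzzle scales difficulty so cleanly.

def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the full list of (from_peg, to_peg) moves for n disks."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)   # clear n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)   # restack the n-1 disks on top
    return moves

for n in (3, 5, 10):
    print(n, len(hanoi(n)))  # 7, 31, 1023 moves respectively
```

A model that "reasons" step by step has to emit every one of those moves correctly, which is why performance can collapse abruptly as n grows.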

10 months ago

1/ Working with Large Reasoning Models (LRMs), advanced versions of models like ChatGPT, is like asking two kids to solve a puzzle. The first one gives an answer right away without much thinking. The second tries to think it through, writing out each step. At first, the thoughtful one seems smarter.

10 months ago
Service recovery by AI or human agents: Do failure and strategy context matter? | Emerald Insight

Here is the full paper: www.emerald.com/insight/cont...

11 months ago

Like a referee with no ego, AI gets the nod when customers mess up: it’s quick, fair, and unemotional. But when the company’s at fault, customers want a human touch, someone who owns the mistake and says “sorry” like they mean it.

11 months ago
All Roads Lead to Likelihood: The Value of Reinforcement Learning in Fine-Tuning From a first-principles perspective, it may seem odd that the strongest results in foundation model fine-tuning (FT) are achieved via a relatively complex, two-stage training procedure. Specifically, ...

4/ It’s easier, quicker, and way less stressful for companies dealing with lots of data guests.

arxiv.org/abs/2503.01067

11 months ago

3/ It’s super flexible, anyone can arrive anytime, yet things still end up nicely organized by the end of the day. Basically, a data lakehouse mixes the easygoing vibe of a backyard hangout (data lake) with the structured planning of a formal party (data warehouse).
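The barbecue analogy can be sketched in a few lines of plain Python (the file contents and field names are invented for illustration): records land raw with no upfront schema, and structure is imposed only when they are read back out, which is the schema-on-read idea behind a lakehouse.

```python
# Toy schema-on-read sketch: raw JSON records "arrive" as-is (the lake),
# and a schema is applied only when we project them into a table (the view).
import json

# 1. Ingest: guests (records) show up with whatever fields they have.
raw_lake = [
    json.dumps({"name": "Ana", "drink": "tea"}),
    json.dumps({"name": "Ben", "topic": "AI", "drink": "coffee"}),
    json.dumps({"name": "Chloe"}),
]

# 2. Read: impose the schema on the way out, filling gaps with None.
schema = ("name", "drink", "topic")
table = [
    tuple(json.loads(line).get(col) for col in schema)
    for line in raw_lake
]

for row in table:
    print(row)
```

Contrast this with schema-on-write (the warehouse / seating-chart approach), where the third record would be rejected at ingest for missing fields instead of being organized later.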

11 months ago

2/ This paper suggests using a "lakehouse" approach, similar to having an open, relaxed backyard barbecue. People (data) show up whenever, chill out anywhere first, and later you casually group them by common interests or conversations.

11 months ago
Post image

1/ Think about handling data like throwing a massive party. Traditional ways of organizing data are like carefully planning seats for each guest in advance: slow, inflexible, and stressful if unexpected guests show up.

11 months ago