Joseph Plazo’s Radical Take on the Limits of Intelligence—Artificial and Otherwise
Everyone expected triumph. What unfolded was a reckoning.
At the University of the Philippines' famed lecture theater, delegates from NUS, Kyoto, HKUST, and AIM assembled to witness the gospel of AI in finance.
They expected Plazo to hand them a blueprint to machine-driven wealth.
They were wrong.
---
### When a Maverick Started with a Paradox
Joseph Plazo is no stranger to accolades.
As he stepped onto the podium, the room went still.
“AI can beat the market. But only if you teach it when not to try.”
The note-taking stopped.
That sentence wasn’t just provocative—it was prophetic.
---
### Dismantling the Myth of Machine Supremacy
There were no demos, no dashboards, no datasets.
He showed failures: neural nets falling apart under real-world pressure.
“Most models,” he said, “are just statistical mirrors of the past.”
Then, with a pause that felt like a punch, he asked:
“Can your AI feel the fear of 2008? Not the charts. The *emotion*.”
You could have heard a pin drop.
---
### But What About Conviction?
They didn’t sit quietly. These were doctoral minds.
A PhD student from Kyoto noted how large language models now detect emotion in text.
Plazo nodded. “Knowing someone’s angry doesn’t tell you why—or what comes next.”
A data scientist from HKUST proposed that probabilistic models could one day simulate conviction.
Plazo’s reply was metaphorical:
“You can simulate weather. But conviction? That’s lightning. You can’t forecast where it’ll strike. Only feel when it does.”
---
### The Real Problem Isn’t AI. It’s Us.
Plazo’s core argument wasn’t that AI is broken. It was that humans are outsourcing responsibility to the machines.
“People are worshipping outputs like oracles.”
Yet his own firm uses AI—but wisely.
His company’s systems scan sentiment, order flow, and liquidity.
“But every output is double-checked by human eyes.”
And with grave calm, he said:
“‘The model told me to do it.’ That’s what we’ll hear after every disaster in the next decade.”
---
### The Warning That Cut Through the Code
Across Asian tech hubs, AI is gospel.
Dr. Anton Leung, a Singapore-based ethicist, whispered after the talk:
“He reminded us: tools without ethics are just sharp objects.”
In a private dialogue among professors, Plazo pressed the point:
“Don’t just teach students here to *code* AI. Teach them to *think* with it.”
---
### No Clapping. Just Standing.
The ending was elegiac, not technical.
“The market isn’t math,” he said. “It’s a tragedy, a comedy, a thriller—written by humans. And if your AI can’t read character, it’ll miss the plot.”
No one moved.
Some said it reminded them of Jobs at Stanford.
Plazo came to remind us: we are still responsible.