I have a feeling your definition of “reasoning” is different from others’. It seems like (1) you have assumed it is a Chinese Room without considering whether humans are also Chinese Rooms, and what makes us different if so; and (2) you have tried to show that it cannot reason by testing it on math, an area it is well known to be particularly bad at, rather than on a genuinely novel input in an area it is likely to be good at but has never seen before.
You might be interested in this article as a counter-argument to the claim that ChatGPT cannot reason: https://thegradient.pub/othello/
I’m curious as to what arguments you’d make against their raven analogy, especially since you made a reference to animals in this piece as well.