Whether symbols are grounded by learning or by evolution does not much matter to my theory; I happen to focus on learned categories, but the raw input we begin with is clearly already filtered and channeled considerably by evolution. It would be incorrect (and homuncular), however, to speak of a grounded system's ``interpreting'' the shapes of its symbols. If the symbols are grounded, then they are connected to, and about, what they are about independently of any interpretations we (outsiders) project on them, in virtue of the system's TTT interactions and capacity. But (as Searle points out in his commentary, and I of course agree) there may still be nobody home in the system, no mind, hence no meaning, in which case the symbols would still not really be ``about'' anything at all, just, at best, TTT-connected to certain objects, events and states of affairs in the world. Grounding does not equal meaning, any more than TTT-capacity guarantees mind. And there is always the further possibility that symbol grounding is a red herring, because symbol systems are a red herring, and not much of whatever really underlies mentation is computational at all. The TTT would still survive if this were the case, but ``grounding'' would then just reduce to robotic ``embeddedness'' and ``situatedness.''