I don't object to what you've just written. Even if we had conscious robots that we programmed with AI software, connected to sensors, and aware of themselves (imagine Boston Dynamics' humanoid and dog-like robots, with a large neural-network brain added), their being conscious by obvious virtue of running software we developed, coded, or evolved with genetic algorithms wouldn't mean we understand that consciousness.
That said, we would have a much better chance of understanding that type of consciousness than our own, unless such an AI arose spontaneously from a long run of machine learning, leaving us with no clue about its inner machinations.