(...) if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
To be fair, they weren't necessarily talking about robo-soldiers there. AI partners could just as easily be benign things like a logistics management system, or the control software for the squad's BigDog, or the pervasive data-mining surveillance program sucking up every private detail of an enemy or friendly populace.
Ethics goes both ways. Human soldiers are capable of refusing unethical orders; however, they are also fully capable of becoming enraged and committing massacres against explicit orders and rules of engagement.
If anything, AI moves the onus of morality squarely onto the politicians responsible for the war.
This seems to be closely related to current developments in mathematics / computer science regarding automatic proof systems:
It is no longer sufficient that the resulting formula or statement is correct. It is also important that the derivation of the result is returned, so that correctness can be checked through other means, e.g. by humans or by different, simpler programs.
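As a toy sketch of that idea (my own illustration, not from any particular proof system): a complicated "solver" returns a witness alongside its answer, and a much simpler, independent checker only has to verify the witness rather than trust the solver's internals.

    def claim_composite(n):
        """'Smart' solver: claims n is composite and returns a witness (a factor).
        This part could be arbitrarily complicated or buggy -- we don't trust it."""
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return {"statement": f"{n} is composite", "witness": d}
        return None  # no claim made

    def check(claim, n):
        """Trivial, independent checker: only verifies the returned witness.
        Confidence in the result rests on these few lines, not on the solver."""
        if claim is None:
            return False
        d = claim["witness"]
        return 1 < d < n and n % d == 0

    claim = claim_composite(91)
    print(claim, check(claim, 91))  # {'statement': '91 is composite', 'witness': 7} True

Proof assistants push the same pattern much further: the search for a proof can be as opaque as you like, because the small trusted kernel re-checks every step of the derivation it is handed.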
There's something eerie about the sound of that.