True ultra-long-term storage with decent capacity could prefix each and every binary file with an ASCII text file describing the structure of the binary in enough detail that a competent programmer could write a parser.
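A minimal sketch of the idea, in Python. The file name, the separator marker, and the toy binary format here are all made up for illustration; the point is just that a plain-ASCII description of the layout travels in the same file as the binary payload.

```python
import struct

# Hypothetical ASCII description of the binary layout that follows.
description = (
    "FORMAT: 4-byte big-endian unsigned integer count,\n"
    "followed by <count> 2-byte big-endian unsigned samples.\n"
)
# Made-up separator a future reader could spot by eye.
marker = b"\n--- BINARY DATA FOLLOWS ---\n"

samples = [10, 20, 30]
payload = struct.pack(">I", len(samples)) + b"".join(
    struct.pack(">H", s) for s in samples
)

with open("archive.bin", "wb") as f:
    f.write(description.encode("ascii"))
    f.write(marker)
    f.write(payload)
```

A reader who can only see raw bytes still gets a human-readable preamble telling them how to parse the rest.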
If you were still worried about that, all binary files could be duplicated in ASCII (minus formatting etc.)
Any civilization advanced enough to read microscopically encoded data would be advanced enough to do basic statistical analysis on ASCII-encoded English (or whatever language) and work out what it is. The harder part is figuring out how to understand English.
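A rough sketch of why that statistical analysis is easy: even without knowing the language, byte-frequency counts make ASCII-encoded English stand out sharply from random data.

```python
from collections import Counter
import os

# Toy corpus standing in for ASCII-encoded English.
english = b"the quick brown fox jumps over the lazy dog " * 50
noise = os.urandom(len(english))  # same length, uniformly random

def top_bytes(data, n=3):
    """Return the n most common byte values in data."""
    return [b for b, _ in Counter(data).most_common(n)]

# In English text a handful of bytes (space, 'e', 't', ...) dominate;
# in uniform random data no byte is much more common than any other.
print(top_bytes(english))
print(top_bytes(noise))
```

That lopsided frequency distribution is exactly the kind of pattern a decoder would latch onto first.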
A recent example of this being done is MIT researchers using a computer program to decipher the ancient language Ugaritic.
300M years is a very long time. Unless there is some continuity in the use of the stored information, in the way it's documented, translated, and reinterpreted, there is no hope that something alive 300M years from now, something that will not be remotely human, will be able to understand it any more than we understand the songs sponges sing to each other.
You could start with uncompressed bitmaps for images, and the pictures themselves can convey all the necessary information, like they do with the golden records on the Voyager probes.
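As a concrete sketch of how self-describing an uncompressed image can be, here is the plain-text PBM format: an ASCII header ("P1", then width and height) followed by 0/1 pixels, with no compression at all, so the structure is visible to the naked eye. The file name is made up.

```python
# A tiny 5x3 checkerboard in plain-ASCII PBM.
width, height = 5, 3
pixels = [
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
]
lines = ["P1", f"{width} {height}"]
lines += [" ".join(str(p) for p in row) for row in pixels]
with open("checker.pbm", "w") as f:
    f.write("\n".join(lines) + "\n")
```

Anyone who works out that the two header numbers are dimensions can recover the picture, and the picture can then carry the rest of the explanation.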
I don't think the stored information should be compressed. Or if it is, it should be very simple compression, like Huffman encoding. That way it wouldn't just look like random data: there would be patterns in it, and statistical analysis could reveal lots of information.
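A minimal Huffman coder sketch to illustrate the point: the output is shorter than the input, but it is built from a fixed per-symbol code table, so symbol frequencies and boundaries remain recoverable in a way that heavier dictionary-based compression tends to smear out.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-free code table for the symbols in data."""
    # Heap entries: (frequency, tiebreak, tree), where tree is either
    # a symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, s) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, n, (t1, t2)))
        n += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # lone-symbol edge case
    walk(heap[0][2], "")
    return codes

text = "long term storage"
codes = huffman_codes(text)
encoded = "".join(codes[c] for c in text)
```

Because the code is prefix-free, a bit string decodes unambiguously once the table is known, and the table itself can be reconstructed from frequency statistics.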
I mean circular, in that the definitions reference each other. E.g. if the definition for "happy" was "not sad" and the definition for "sad" was "not happy". If you didn't know one of those words already, the dictionary would be useless.