I too am curious how 64 GB of unified memory performs for training deep learning models. Even if speed isn't amazing, 64 GB is far more than the 24 GB available on Nvidia's flagship consumer cards, which would allow for larger input images, bigger batch sizes, deeper networks, etc. It will also be interesting to see how all of the different cores get used. A rough sketch of the kind of workload I mean is below.
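For illustration, here is a minimal sketch of the sort of experiment I have in mind, assuming PyTorch's MPS backend is available; the model, batch size, and image resolution are my own illustrative choices, not anything measured, and whether a given configuration fits in 24 GB vs. 64 GB depends heavily on the framework's memory overhead.

```python
# Hypothetical sketch: one training step at a batch size / resolution that is
# roomy for 64 GB of unified memory but may be tight on a 24 GB GPU.
# All numbers here are illustrative assumptions, not benchmarks.
import torch
import torch.nn as nn

# Fall back to CPU if the Metal (MPS) backend isn't available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

BATCH_SIZE = 64          # illustrative; scale up/down to probe memory limits
RESOLUTION = 768         # large input images are where extra memory helps

# Small convolutional model; activation memory, not parameters, dominates here.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Synthetic batch: the stored activations for backward at this size run into
# the tens of gigabytes, which is the point of having 64 GB available.
x = torch.randn(BATCH_SIZE, 3, RESOLUTION, RESOLUTION, device=device)
y = torch.randint(0, 10, (BATCH_SIZE,), device=device)

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```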
No. This has been misreported. The RAM is on the same package but not part of the same silicon die. Basically the SoC is mounted next to the RAM on a carrier.