Incremental loading is one of the things I dislike about the native Cocoa text/document components, especially if you have a large file and want to scroll to the end. Scroll and wait. Scroll and wait. Scroll and wait.
You can use the same basic techniques presented in the original article for loading large files instantly on macOS (I did), and then use Core Text APIs for rendering the actual text instead of the Win32 API calls.
You need to use the Core Text APIs over the NS equivalents, or at least you do if you want to handle text with long lines. All of the NS* text machinery eventually bottoms out in Core Text, and one of the functions (I forget which one, but likely CTTypesetterCreateWithAttributedString) becomes noticeably sluggish when processing long lines (think > 10k characters per line), even when most of that text isn't actually appearing on screen. At least it does for Chinese; I'm not sure about other scripts.
However, if you break that same 10k line up into a bunch of smaller text chunks first, and stop once you get past the visible screen, the combined time of processing each chunk is significantly less than processing it as a single piece of text. Checking my source code, I've currently got the maximum length to split on set at 1,024 and that works quite well.
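A minimal sketch of that splitting step in Swift (the function name, the chunk size default, and the commented rendering loop are my own illustration, not the commenter's actual code):

```swift
import Foundation

// Split a long line into chunks of at most `maxLength` characters.
// Each chunk would then be typeset independently via
// CTTypesetterCreateWithAttributedString / CTTypesetterCreateLine,
// and you stop iterating chunks once the laid-out text falls
// below the visible bounds of the view.
func splitIntoChunks(_ text: String, maxLength: Int = 1024) -> [Substring] {
    var chunks: [Substring] = []
    var start = text.startIndex
    while start < text.endIndex {
        // Advance by maxLength characters, clamped to the end of the string.
        let end = text.index(start, offsetBy: maxLength,
                             limitedBy: text.endIndex) ?? text.endIndex
        chunks.append(text[start..<end])
        start = end
    }
    return chunks
}

// Hypothetical draw loop, in outline:
//   for chunk in splitIntoChunks(longLine) {
//       let typesetter = CTTypesetterCreateWithAttributedString(
//           NSAttributedString(string: String(chunk)))
//       ... create lines, measure, draw ...
//       if currentY > visibleRect.maxY { break }  // off-screen: stop early
//   }
```

The early `break` is what makes this pay off: the expensive typesetting work is bounded by what's visible rather than by the length of the line.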